00:00:00.001 Started by upstream project "autotest-per-patch" build number 132533 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.087 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.088 The recommended git tool is: git 00:00:00.088 using credential 00000000-0000-0000-0000-000000000002 00:00:00.091 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.129 Fetching changes from the remote Git repository 00:00:00.131 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.179 Using shallow fetch with depth 1 00:00:00.179 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.179 > git --version # timeout=10 00:00:00.233 > git --version # 'git version 2.39.2' 00:00:00.233 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.269 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.269 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.653 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.665 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.676 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.676 > git config core.sparsecheckout # timeout=10 00:00:04.687 > git read-tree -mu HEAD # timeout=10 00:00:04.702 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.723 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.724 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.799 [Pipeline] Start of Pipeline 00:00:04.812 [Pipeline] library 00:00:04.814 Loading library shm_lib@master 00:00:04.815 Library shm_lib@master is cached. Copying from home. 00:00:04.829 [Pipeline] node 00:00:04.838 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.839 [Pipeline] { 00:00:04.850 [Pipeline] catchError 00:00:04.852 [Pipeline] { 00:00:04.864 [Pipeline] wrap 00:00:04.869 [Pipeline] { 00:00:04.875 [Pipeline] stage 00:00:04.877 [Pipeline] { (Prologue) 00:00:05.099 [Pipeline] sh 00:00:05.387 + logger -p user.info -t JENKINS-CI 00:00:05.405 [Pipeline] echo 00:00:05.406 Node: CYP9 00:00:05.413 [Pipeline] sh 00:00:05.712 [Pipeline] setCustomBuildProperty 00:00:05.724 [Pipeline] echo 00:00:05.726 Cleanup processes 00:00:05.731 [Pipeline] sh 00:00:06.045 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.045 2610956 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.060 [Pipeline] sh 00:00:06.346 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.346 ++ grep -v 'sudo pgrep' 00:00:06.346 ++ awk '{print $1}' 00:00:06.346 + sudo kill -9 00:00:06.346 + true 00:00:06.365 [Pipeline] cleanWs 00:00:06.375 [WS-CLEANUP] Deleting project workspace... 00:00:06.375 [WS-CLEANUP] Deferred wipeout is used... 
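The cleanup step above locates stale SPDK processes under the workspace and force-kills them before the run starts. A minimal standalone sketch of the same idiom, assuming the workspace path from this job (the trailing '|| true' mirrors the '+ true' in the trace, so an empty PID list does not fail the build):

  #!/usr/bin/env bash
  # Kill leftover SPDK processes from a previous run in this workspace.
  WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
  # pgrep -af prints "PID full-command"; drop the pgrep line itself, keep the PIDs.
  pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
  # kill -9 exits non-zero when the list is empty, hence the "+ true" in the log.
  sudo kill -9 $pids || true
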
00:00:06.382 [WS-CLEANUP] done 00:00:06.385 [Pipeline] setCustomBuildProperty 00:00:06.398 [Pipeline] sh 00:00:06.701 + sudo git config --global --replace-all safe.directory '*' 00:00:06.775 [Pipeline] httpRequest 00:00:07.149 [Pipeline] echo 00:00:07.151 Sorcerer 10.211.164.101 is alive 00:00:07.159 [Pipeline] retry 00:00:07.161 [Pipeline] { 00:00:07.172 [Pipeline] httpRequest 00:00:07.176 HttpMethod: GET 00:00:07.177 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.177 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.183 Response Code: HTTP/1.1 200 OK 00:00:07.183 Success: Status code 200 is in the accepted range: 200,404 00:00:07.184 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.922 [Pipeline] } 00:00:08.938 [Pipeline] // retry 00:00:08.944 [Pipeline] sh 00:00:09.226 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.244 [Pipeline] httpRequest 00:00:09.619 [Pipeline] echo 00:00:09.621 Sorcerer 10.211.164.101 is alive 00:00:09.631 [Pipeline] retry 00:00:09.633 [Pipeline] { 00:00:09.647 [Pipeline] httpRequest 00:00:09.652 HttpMethod: GET 00:00:09.652 URL: http://10.211.164.101/packages/spdk_afdec00e1724f79bc502355ac0ab5bdff6ad1504.tar.gz 00:00:09.653 Sending request to url: http://10.211.164.101/packages/spdk_afdec00e1724f79bc502355ac0ab5bdff6ad1504.tar.gz 00:00:09.666 Response Code: HTTP/1.1 200 OK 00:00:09.666 Success: Status code 200 is in the accepted range: 200,404 00:00:09.667 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_afdec00e1724f79bc502355ac0ab5bdff6ad1504.tar.gz 00:01:14.869 [Pipeline] } 00:01:14.887 [Pipeline] // retry 00:01:14.895 [Pipeline] sh 00:01:15.186 + tar --no-same-owner -xf spdk_afdec00e1724f79bc502355ac0ab5bdff6ad1504.tar.gz 00:01:18.504 [Pipeline] sh 00:01:18.793 + git -C spdk log --oneline -n5 00:01:18.793 afdec00e1 nvmf: Add hide_metadata option to nvmf_subsystem_add_ns 00:01:18.793 b09de013a nvmf: Get metadata config by not bdev but bdev_desc 00:01:18.793 971ec0126 bdevperf: Add hide_metadata option 00:01:18.793 894d5af2a bdevperf: Get metadata config by not bdev but bdev_desc 00:01:18.793 075fb5b8c bdevperf: Store the result of DIF type check into job structure 00:01:18.806 [Pipeline] } 00:01:18.820 [Pipeline] // stage 00:01:18.830 [Pipeline] stage 00:01:18.832 [Pipeline] { (Prepare) 00:01:18.850 [Pipeline] writeFile 00:01:18.867 [Pipeline] sh 00:01:19.155 + logger -p user.info -t JENKINS-CI 00:01:19.170 [Pipeline] sh 00:01:19.459 + logger -p user.info -t JENKINS-CI 00:01:19.472 [Pipeline] sh 00:01:19.759 + cat autorun-spdk.conf 00:01:19.759 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.759 SPDK_TEST_NVMF=1 00:01:19.759 SPDK_TEST_NVME_CLI=1 00:01:19.759 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:19.759 SPDK_TEST_NVMF_NICS=e810 00:01:19.759 SPDK_TEST_VFIOUSER=1 00:01:19.759 SPDK_RUN_UBSAN=1 00:01:19.759 NET_TYPE=phy 00:01:19.770 RUN_NIGHTLY=0 00:01:19.774 [Pipeline] readFile 00:01:19.797 [Pipeline] withEnv 00:01:19.799 [Pipeline] { 00:01:19.811 [Pipeline] sh 00:01:20.102 + set -ex 00:01:20.102 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:20.102 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:20.102 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.102 ++ SPDK_TEST_NVMF=1 00:01:20.102 ++ SPDK_TEST_NVME_CLI=1 00:01:20.102 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:20.102 ++ 
SPDK_TEST_NVMF_NICS=e810 00:01:20.102 ++ SPDK_TEST_VFIOUSER=1 00:01:20.102 ++ SPDK_RUN_UBSAN=1 00:01:20.102 ++ NET_TYPE=phy 00:01:20.102 ++ RUN_NIGHTLY=0 00:01:20.102 + case $SPDK_TEST_NVMF_NICS in 00:01:20.102 + DRIVERS=ice 00:01:20.102 + [[ tcp == \r\d\m\a ]] 00:01:20.102 + [[ -n ice ]] 00:01:20.102 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:20.102 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:20.102 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:20.102 rmmod: ERROR: Module irdma is not currently loaded 00:01:20.102 rmmod: ERROR: Module i40iw is not currently loaded 00:01:20.102 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:20.102 + true 00:01:20.102 + for D in $DRIVERS 00:01:20.102 + sudo modprobe ice 00:01:20.102 + exit 0 00:01:20.112 [Pipeline] } 00:01:20.128 [Pipeline] // withEnv 00:01:20.133 [Pipeline] } 00:01:20.146 [Pipeline] // stage 00:01:20.155 [Pipeline] catchError 00:01:20.156 [Pipeline] { 00:01:20.169 [Pipeline] timeout 00:01:20.169 Timeout set to expire in 1 hr 0 min 00:01:20.171 [Pipeline] { 00:01:20.185 [Pipeline] stage 00:01:20.187 [Pipeline] { (Tests) 00:01:20.199 [Pipeline] sh 00:01:20.488 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:20.488 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:20.488 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:20.488 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:20.488 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:20.488 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:20.488 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:20.488 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:20.488 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:20.488 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:20.488 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:20.488 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:20.488 + source /etc/os-release 00:01:20.488 ++ NAME='Fedora Linux' 00:01:20.488 ++ VERSION='39 (Cloud Edition)' 00:01:20.488 ++ ID=fedora 00:01:20.488 ++ VERSION_ID=39 00:01:20.488 ++ VERSION_CODENAME= 00:01:20.488 ++ PLATFORM_ID=platform:f39 00:01:20.488 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:20.488 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:20.488 ++ LOGO=fedora-logo-icon 00:01:20.488 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:20.488 ++ HOME_URL=https://fedoraproject.org/ 00:01:20.488 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:20.488 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:20.488 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:20.488 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:20.488 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:20.488 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:20.488 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:20.488 ++ SUPPORT_END=2024-11-12 00:01:20.488 ++ VARIANT='Cloud Edition' 00:01:20.488 ++ VARIANT_ID=cloud 00:01:20.488 + uname -a 00:01:20.488 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:20.488 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:23.791 Hugepages 00:01:23.791 node hugesize free / total 00:01:23.791 node0 1048576kB 0 / 0 00:01:23.791 node0 2048kB 0 / 0 00:01:23.791 node1 1048576kB 0 / 0 00:01:23.791 node1 2048kB 0 / 0 00:01:23.791 
00:01:23.791 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:23.791 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:01:23.791 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:01:23.791 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:01:23.791 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:01:23.791 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:01:23.791 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:01:23.791 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:01:23.791 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:01:23.791 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:01:23.791 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:01:23.791 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:01:23.791 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:01:23.791 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:01:23.791 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:01:23.791 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:01:23.791 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:01:23.791 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:01:23.791 + rm -f /tmp/spdk-ld-path
00:01:23.791 + source autorun-spdk.conf
00:01:23.791 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:23.791 ++ SPDK_TEST_NVMF=1
00:01:23.791 ++ SPDK_TEST_NVME_CLI=1
00:01:23.791 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:23.791 ++ SPDK_TEST_NVMF_NICS=e810
00:01:23.791 ++ SPDK_TEST_VFIOUSER=1
00:01:23.791 ++ SPDK_RUN_UBSAN=1
00:01:23.791 ++ NET_TYPE=phy
00:01:23.791 ++ RUN_NIGHTLY=0
00:01:23.791 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:23.791 + [[ -n '' ]]
00:01:23.791 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:23.791 + for M in /var/spdk/build-*-manifest.txt
00:01:23.791 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:23.791 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:23.791 + for M in /var/spdk/build-*-manifest.txt
00:01:23.791 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:23.791 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:23.791 + for M in /var/spdk/build-*-manifest.txt
00:01:23.791 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:23.791 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:23.791 ++ uname
00:01:23.791 + [[ Linux == \L\i\n\u\x ]]
00:01:23.791 + sudo dmesg -T
00:01:23.791 + sudo dmesg --clear
00:01:23.791 + dmesg_pid=2612502
00:01:23.791 + [[ Fedora Linux == FreeBSD ]]
00:01:23.791 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:23.791 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:23.791 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:23.791 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:23.791 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:23.791 + [[ -x /usr/src/fio-static/fio ]]
00:01:23.791 + export FIO_BIN=/usr/src/fio-static/fio
00:01:23.791 + FIO_BIN=/usr/src/fio-static/fio
00:01:23.791 + sudo dmesg -Tw
00:01:23.791 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:23.791 + [[ !
-v VFIO_QEMU_BIN ]] 00:01:23.791 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:23.791 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:23.791 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:23.791 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:23.791 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:23.791 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:23.791 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:24.053 18:51:41 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:24.053 18:51:41 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:24.053 18:51:41 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:24.053 18:51:41 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:24.053 18:51:41 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:24.053 18:51:41 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:24.053 18:51:41 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:24.053 18:51:41 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:24.053 18:51:41 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:24.053 18:51:41 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:24.053 18:51:41 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:24.053 18:51:41 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:24.053 18:51:41 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:24.053 18:51:41 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:24.053 18:51:41 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:24.053 18:51:41 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:24.053 18:51:41 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:24.053 18:51:41 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:24.053 18:51:41 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:24.053 18:51:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.053 18:51:41 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.053 18:51:41 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.053 18:51:41 -- paths/export.sh@5 -- $ export PATH 00:01:24.053 18:51:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.053 18:51:41 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:24.053 18:51:41 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:24.053 18:51:41 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732643501.XXXXXX 00:01:24.053 18:51:41 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732643501.VsfFxE 00:01:24.053 18:51:41 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:24.053 18:51:41 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:24.053 18:51:41 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:24.053 18:51:41 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:24.053 18:51:41 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:24.053 18:51:41 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:24.053 18:51:41 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:24.053 18:51:41 -- common/autotest_common.sh@10 -- $ set +x 00:01:24.053 18:51:41 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:24.053 18:51:41 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:24.053 18:51:41 -- pm/common@17 -- $ local monitor 00:01:24.053 18:51:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:24.053 18:51:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:24.053 18:51:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:24.053 18:51:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:24.053 18:51:41 -- pm/common@21 -- $ date +%s 00:01:24.054 18:51:41 -- pm/common@21 -- $ date +%s 00:01:24.054 18:51:41 -- pm/common@25 -- $ sleep 1 00:01:24.054 18:51:41 -- pm/common@21 -- $ date +%s 00:01:24.054 18:51:41 -- pm/common@21 -- $ date +%s 00:01:24.054 18:51:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732643501 00:01:24.054 18:51:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732643501 00:01:24.054 18:51:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732643501 00:01:24.054 18:51:41 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732643501 00:01:24.054 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732643501_collect-cpu-load.pm.log 00:01:24.054 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732643501_collect-vmstat.pm.log 00:01:24.054 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732643501_collect-cpu-temp.pm.log 00:01:24.054 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732643501_collect-bmc-pm.bmc.pm.log 00:01:24.997 18:51:42 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:24.997 18:51:42 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:24.997 18:51:42 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:24.997 18:51:42 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:24.997 18:51:42 -- spdk/autobuild.sh@16 -- $ date -u 00:01:24.997 Tue Nov 26 05:51:42 PM UTC 2024 00:01:24.997 18:51:42 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:24.997 v25.01-pre-262-gafdec00e1 00:01:24.997 18:51:42 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:24.997 18:51:42 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:24.997 18:51:42 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:24.997 18:51:42 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:24.997 18:51:42 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:24.997 18:51:42 -- common/autotest_common.sh@10 -- $ set +x 00:01:24.997 ************************************ 00:01:24.997 START TEST ubsan 00:01:24.997 ************************************ 00:01:24.997 18:51:42 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:24.997 using ubsan 00:01:24.997 00:01:24.997 real 0m0.001s 00:01:24.997 user 0m0.001s 00:01:24.997 sys 0m0.000s 00:01:24.997 18:51:42 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:24.997 18:51:42 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:24.997 ************************************ 00:01:24.997 END TEST ubsan 00:01:24.997 ************************************ 00:01:25.259 18:51:42 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:25.259 18:51:42 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:25.259 18:51:42 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:25.259 18:51:42 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:25.259 18:51:42 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:25.259 18:51:42 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:25.259 18:51:42 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:25.259 18:51:42 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:25.259 
18:51:42 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:25.259 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:25.259 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:25.832 Using 'verbs' RDMA provider 00:01:41.692 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:53.927 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:54.530 Creating mk/config.mk...done. 00:01:54.530 Creating mk/cc.flags.mk...done. 00:01:54.530 Type 'make' to build. 00:01:54.530 18:52:11 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:01:54.530 18:52:11 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:54.530 18:52:11 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:54.530 18:52:11 -- common/autotest_common.sh@10 -- $ set +x 00:01:54.530 ************************************ 00:01:54.530 START TEST make 00:01:54.530 ************************************ 00:01:54.530 18:52:11 make -- common/autotest_common.sh@1129 -- $ make -j144 00:01:55.101 make[1]: Nothing to be done for 'all'. 00:01:56.484 The Meson build system 00:01:56.484 Version: 1.5.0 00:01:56.484 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:56.484 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:56.484 Build type: native build 00:01:56.484 Project name: libvfio-user 00:01:56.484 Project version: 0.0.1 00:01:56.485 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:56.485 C linker for the host machine: cc ld.bfd 2.40-14 00:01:56.485 Host machine cpu family: x86_64 00:01:56.485 Host machine cpu: x86_64 00:01:56.485 Run-time dependency threads found: YES 00:01:56.485 Library dl found: YES 00:01:56.485 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:56.485 Run-time dependency json-c found: YES 0.17 00:01:56.485 Run-time dependency cmocka found: YES 1.1.7 00:01:56.485 Program pytest-3 found: NO 00:01:56.485 Program flake8 found: NO 00:01:56.485 Program misspell-fixer found: NO 00:01:56.485 Program restructuredtext-lint found: NO 00:01:56.485 Program valgrind found: YES (/usr/bin/valgrind) 00:01:56.485 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:56.485 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:56.485 Compiler for C supports arguments -Wwrite-strings: YES 00:01:56.485 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:56.485 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:56.485 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:56.485 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
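The configure entry above records the exact flags this job builds SPDK with. A minimal sketch of reproducing that configuration by hand, assuming an SPDK checkout at the same workspace path (flags copied verbatim from the log line; -j144 matches the make step launched above via run_test):

  # Re-run the same SPDK configuration and build this job used.
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
  make -j144
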
00:01:56.485 Build targets in project: 8 00:01:56.485 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:56.485 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:56.485 00:01:56.485 libvfio-user 0.0.1 00:01:56.485 00:01:56.485 User defined options 00:01:56.485 buildtype : debug 00:01:56.485 default_library: shared 00:01:56.485 libdir : /usr/local/lib 00:01:56.485 00:01:56.485 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:56.743 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:57.005 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:57.005 [2/37] Compiling C object samples/null.p/null.c.o 00:01:57.005 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:57.005 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:57.005 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:57.005 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:57.005 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:57.005 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:57.005 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:57.005 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:57.005 [11/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:57.005 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:57.005 [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:57.005 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:57.005 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:57.005 [16/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:57.005 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:57.005 [18/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:57.005 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:57.005 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:57.005 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:57.005 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:57.005 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:57.005 [24/37] Compiling C object samples/server.p/server.c.o 00:01:57.005 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:57.005 [26/37] Compiling C object samples/client.p/client.c.o 00:01:57.005 [27/37] Linking target samples/client 00:01:57.005 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:57.005 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:57.005 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:57.005 [31/37] Linking target test/unit_tests 00:01:57.266 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:57.266 [33/37] Linking target samples/server 00:01:57.266 [34/37] Linking target samples/gpio-pci-idio-16 00:01:57.266 [35/37] Linking target samples/null 00:01:57.266 [36/37] Linking target samples/lspci 00:01:57.266 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:57.266 INFO: autodetecting backend as ninja 00:01:57.266 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:57.526 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:57.787 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:57.787 ninja: no work to do. 00:02:04.373 The Meson build system 00:02:04.373 Version: 1.5.0 00:02:04.373 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:04.373 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:04.373 Build type: native build 00:02:04.373 Program cat found: YES (/usr/bin/cat) 00:02:04.373 Project name: DPDK 00:02:04.373 Project version: 24.03.0 00:02:04.373 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:04.373 C linker for the host machine: cc ld.bfd 2.40-14 00:02:04.373 Host machine cpu family: x86_64 00:02:04.373 Host machine cpu: x86_64 00:02:04.373 Message: ## Building in Developer Mode ## 00:02:04.373 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:04.373 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:04.373 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:04.373 Program python3 found: YES (/usr/bin/python3) 00:02:04.373 Program cat found: YES (/usr/bin/cat) 00:02:04.373 Compiler for C supports arguments -march=native: YES 00:02:04.373 Checking for size of "void *" : 8 00:02:04.373 Checking for size of "void *" : 8 (cached) 00:02:04.373 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:04.373 Library m found: YES 00:02:04.373 Library numa found: YES 00:02:04.373 Has header "numaif.h" : YES 00:02:04.373 Library fdt found: NO 00:02:04.373 Library execinfo found: NO 00:02:04.373 Has header "execinfo.h" : YES 00:02:04.373 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:04.373 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:04.373 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:04.373 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:04.373 Run-time dependency openssl found: YES 3.1.1 00:02:04.373 Run-time dependency libpcap found: YES 1.10.4 00:02:04.373 Has header "pcap.h" with dependency libpcap: YES 00:02:04.373 Compiler for C supports arguments -Wcast-qual: YES 00:02:04.373 Compiler for C supports arguments -Wdeprecated: YES 00:02:04.373 Compiler for C supports arguments -Wformat: YES 00:02:04.373 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:04.373 Compiler for C supports arguments -Wformat-security: NO 00:02:04.373 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:04.373 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:04.373 Compiler for C supports arguments -Wnested-externs: YES 00:02:04.373 Compiler for C supports arguments -Wold-style-definition: YES 00:02:04.373 Compiler for C supports arguments -Wpointer-arith: YES 00:02:04.373 Compiler for C supports arguments -Wsign-compare: YES 00:02:04.373 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:04.373 Compiler for C supports arguments -Wundef: YES 00:02:04.373 Compiler for C supports arguments -Wwrite-strings: YES 00:02:04.373 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:04.373 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:02:04.373 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:04.373 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:04.373 Program objdump found: YES (/usr/bin/objdump) 00:02:04.373 Compiler for C supports arguments -mavx512f: YES 00:02:04.373 Checking if "AVX512 checking" compiles: YES 00:02:04.373 Fetching value of define "__SSE4_2__" : 1 00:02:04.373 Fetching value of define "__AES__" : 1 00:02:04.373 Fetching value of define "__AVX__" : 1 00:02:04.373 Fetching value of define "__AVX2__" : 1 00:02:04.373 Fetching value of define "__AVX512BW__" : 1 00:02:04.373 Fetching value of define "__AVX512CD__" : 1 00:02:04.373 Fetching value of define "__AVX512DQ__" : 1 00:02:04.373 Fetching value of define "__AVX512F__" : 1 00:02:04.373 Fetching value of define "__AVX512VL__" : 1 00:02:04.373 Fetching value of define "__PCLMUL__" : 1 00:02:04.373 Fetching value of define "__RDRND__" : 1 00:02:04.373 Fetching value of define "__RDSEED__" : 1 00:02:04.373 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:04.373 Fetching value of define "__znver1__" : (undefined) 00:02:04.374 Fetching value of define "__znver2__" : (undefined) 00:02:04.374 Fetching value of define "__znver3__" : (undefined) 00:02:04.374 Fetching value of define "__znver4__" : (undefined) 00:02:04.374 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:04.374 Message: lib/log: Defining dependency "log" 00:02:04.374 Message: lib/kvargs: Defining dependency "kvargs" 00:02:04.374 Message: lib/telemetry: Defining dependency "telemetry" 00:02:04.374 Checking for function "getentropy" : NO 00:02:04.374 Message: lib/eal: Defining dependency "eal" 00:02:04.374 Message: lib/ring: Defining dependency "ring" 00:02:04.374 Message: lib/rcu: Defining dependency "rcu" 00:02:04.374 Message: lib/mempool: Defining dependency "mempool" 00:02:04.374 Message: lib/mbuf: Defining dependency "mbuf" 00:02:04.374 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:04.374 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:04.374 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:04.374 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:04.374 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:04.374 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:04.374 Compiler for C supports arguments -mpclmul: YES 00:02:04.374 Compiler for C supports arguments -maes: YES 00:02:04.374 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:04.374 Compiler for C supports arguments -mavx512bw: YES 00:02:04.374 Compiler for C supports arguments -mavx512dq: YES 00:02:04.374 Compiler for C supports arguments -mavx512vl: YES 00:02:04.374 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:04.374 Compiler for C supports arguments -mavx2: YES 00:02:04.374 Compiler for C supports arguments -mavx: YES 00:02:04.374 Message: lib/net: Defining dependency "net" 00:02:04.374 Message: lib/meter: Defining dependency "meter" 00:02:04.374 Message: lib/ethdev: Defining dependency "ethdev" 00:02:04.374 Message: lib/pci: Defining dependency "pci" 00:02:04.374 Message: lib/cmdline: Defining dependency "cmdline" 00:02:04.374 Message: lib/hash: Defining dependency "hash" 00:02:04.374 Message: lib/timer: Defining dependency "timer" 00:02:04.374 Message: lib/compressdev: Defining dependency "compressdev" 00:02:04.374 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:04.374 Message: lib/dmadev: Defining dependency "dmadev" 
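Each "Compiler for C supports arguments ...: YES/NO" line above is a one-flag compile probe. A minimal hand-rolled equivalent, shown here as a hypothetical standalone check rather than DPDK's or meson's own tooling:

  # Probe a single compiler flag the way the checks above do; an unsupported
  # flag makes cc exit non-zero, which maps to the NO case.
  flag=-mavx512f
  if echo 'int main(void){return 0;}' | cc -Werror "$flag" -x c - -o /dev/null; then
    echo "Compiler for C supports arguments $flag: YES"
  else
    echo "Compiler for C supports arguments $flag: NO"
  fi
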
00:02:04.374 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:04.374 Message: lib/power: Defining dependency "power" 00:02:04.374 Message: lib/reorder: Defining dependency "reorder" 00:02:04.374 Message: lib/security: Defining dependency "security" 00:02:04.374 Has header "linux/userfaultfd.h" : YES 00:02:04.374 Has header "linux/vduse.h" : YES 00:02:04.374 Message: lib/vhost: Defining dependency "vhost" 00:02:04.374 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:04.374 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:04.374 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:04.374 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:04.374 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:04.374 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:04.374 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:04.374 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:04.374 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:04.374 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:04.374 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:04.374 Configuring doxy-api-html.conf using configuration 00:02:04.374 Configuring doxy-api-man.conf using configuration 00:02:04.374 Program mandb found: YES (/usr/bin/mandb) 00:02:04.374 Program sphinx-build found: NO 00:02:04.374 Configuring rte_build_config.h using configuration 00:02:04.374 Message: 00:02:04.374 ================= 00:02:04.374 Applications Enabled 00:02:04.374 ================= 00:02:04.374 00:02:04.374 apps: 00:02:04.374 00:02:04.374 00:02:04.374 Message: 00:02:04.374 ================= 00:02:04.374 Libraries Enabled 00:02:04.374 ================= 00:02:04.374 00:02:04.374 libs: 00:02:04.374 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:04.374 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:04.374 cryptodev, dmadev, power, reorder, security, vhost, 00:02:04.374 00:02:04.374 Message: 00:02:04.374 =============== 00:02:04.374 Drivers Enabled 00:02:04.374 =============== 00:02:04.374 00:02:04.374 common: 00:02:04.374 00:02:04.374 bus: 00:02:04.374 pci, vdev, 00:02:04.374 mempool: 00:02:04.374 ring, 00:02:04.374 dma: 00:02:04.374 00:02:04.374 net: 00:02:04.374 00:02:04.374 crypto: 00:02:04.374 00:02:04.374 compress: 00:02:04.374 00:02:04.374 vdpa: 00:02:04.374 00:02:04.374 00:02:04.374 Message: 00:02:04.374 ================= 00:02:04.374 Content Skipped 00:02:04.374 ================= 00:02:04.374 00:02:04.374 apps: 00:02:04.374 dumpcap: explicitly disabled via build config 00:02:04.374 graph: explicitly disabled via build config 00:02:04.374 pdump: explicitly disabled via build config 00:02:04.374 proc-info: explicitly disabled via build config 00:02:04.374 test-acl: explicitly disabled via build config 00:02:04.374 test-bbdev: explicitly disabled via build config 00:02:04.374 test-cmdline: explicitly disabled via build config 00:02:04.374 test-compress-perf: explicitly disabled via build config 00:02:04.374 test-crypto-perf: explicitly disabled via build config 00:02:04.374 test-dma-perf: explicitly disabled via build config 00:02:04.374 test-eventdev: explicitly disabled via build config 00:02:04.374 test-fib: explicitly disabled via build config 00:02:04.374 test-flow-perf: explicitly disabled via build config 00:02:04.374 test-gpudev: explicitly disabled 
via build config 00:02:04.374 test-mldev: explicitly disabled via build config 00:02:04.374 test-pipeline: explicitly disabled via build config 00:02:04.374 test-pmd: explicitly disabled via build config 00:02:04.374 test-regex: explicitly disabled via build config 00:02:04.374 test-sad: explicitly disabled via build config 00:02:04.374 test-security-perf: explicitly disabled via build config 00:02:04.374 00:02:04.374 libs: 00:02:04.374 argparse: explicitly disabled via build config 00:02:04.374 metrics: explicitly disabled via build config 00:02:04.374 acl: explicitly disabled via build config 00:02:04.374 bbdev: explicitly disabled via build config 00:02:04.374 bitratestats: explicitly disabled via build config 00:02:04.374 bpf: explicitly disabled via build config 00:02:04.374 cfgfile: explicitly disabled via build config 00:02:04.374 distributor: explicitly disabled via build config 00:02:04.374 efd: explicitly disabled via build config 00:02:04.374 eventdev: explicitly disabled via build config 00:02:04.374 dispatcher: explicitly disabled via build config 00:02:04.374 gpudev: explicitly disabled via build config 00:02:04.374 gro: explicitly disabled via build config 00:02:04.374 gso: explicitly disabled via build config 00:02:04.374 ip_frag: explicitly disabled via build config 00:02:04.374 jobstats: explicitly disabled via build config 00:02:04.374 latencystats: explicitly disabled via build config 00:02:04.374 lpm: explicitly disabled via build config 00:02:04.374 member: explicitly disabled via build config 00:02:04.374 pcapng: explicitly disabled via build config 00:02:04.374 rawdev: explicitly disabled via build config 00:02:04.374 regexdev: explicitly disabled via build config 00:02:04.374 mldev: explicitly disabled via build config 00:02:04.374 rib: explicitly disabled via build config 00:02:04.374 sched: explicitly disabled via build config 00:02:04.374 stack: explicitly disabled via build config 00:02:04.374 ipsec: explicitly disabled via build config 00:02:04.374 pdcp: explicitly disabled via build config 00:02:04.374 fib: explicitly disabled via build config 00:02:04.374 port: explicitly disabled via build config 00:02:04.374 pdump: explicitly disabled via build config 00:02:04.374 table: explicitly disabled via build config 00:02:04.374 pipeline: explicitly disabled via build config 00:02:04.374 graph: explicitly disabled via build config 00:02:04.374 node: explicitly disabled via build config 00:02:04.374 00:02:04.374 drivers: 00:02:04.374 common/cpt: not in enabled drivers build config 00:02:04.374 common/dpaax: not in enabled drivers build config 00:02:04.374 common/iavf: not in enabled drivers build config 00:02:04.374 common/idpf: not in enabled drivers build config 00:02:04.374 common/ionic: not in enabled drivers build config 00:02:04.374 common/mvep: not in enabled drivers build config 00:02:04.374 common/octeontx: not in enabled drivers build config 00:02:04.374 bus/auxiliary: not in enabled drivers build config 00:02:04.374 bus/cdx: not in enabled drivers build config 00:02:04.374 bus/dpaa: not in enabled drivers build config 00:02:04.374 bus/fslmc: not in enabled drivers build config 00:02:04.374 bus/ifpga: not in enabled drivers build config 00:02:04.374 bus/platform: not in enabled drivers build config 00:02:04.374 bus/uacce: not in enabled drivers build config 00:02:04.374 bus/vmbus: not in enabled drivers build config 00:02:04.374 common/cnxk: not in enabled drivers build config 00:02:04.374 common/mlx5: not in enabled drivers build config 00:02:04.374 
common/nfp: not in enabled drivers build config 00:02:04.374 common/nitrox: not in enabled drivers build config 00:02:04.374 common/qat: not in enabled drivers build config 00:02:04.374 common/sfc_efx: not in enabled drivers build config 00:02:04.374 mempool/bucket: not in enabled drivers build config 00:02:04.374 mempool/cnxk: not in enabled drivers build config 00:02:04.374 mempool/dpaa: not in enabled drivers build config 00:02:04.374 mempool/dpaa2: not in enabled drivers build config 00:02:04.374 mempool/octeontx: not in enabled drivers build config 00:02:04.374 mempool/stack: not in enabled drivers build config 00:02:04.374 dma/cnxk: not in enabled drivers build config 00:02:04.374 dma/dpaa: not in enabled drivers build config 00:02:04.374 dma/dpaa2: not in enabled drivers build config 00:02:04.374 dma/hisilicon: not in enabled drivers build config 00:02:04.374 dma/idxd: not in enabled drivers build config 00:02:04.374 dma/ioat: not in enabled drivers build config 00:02:04.374 dma/skeleton: not in enabled drivers build config 00:02:04.375 net/af_packet: not in enabled drivers build config 00:02:04.375 net/af_xdp: not in enabled drivers build config 00:02:04.375 net/ark: not in enabled drivers build config 00:02:04.375 net/atlantic: not in enabled drivers build config 00:02:04.375 net/avp: not in enabled drivers build config 00:02:04.375 net/axgbe: not in enabled drivers build config 00:02:04.375 net/bnx2x: not in enabled drivers build config 00:02:04.375 net/bnxt: not in enabled drivers build config 00:02:04.375 net/bonding: not in enabled drivers build config 00:02:04.375 net/cnxk: not in enabled drivers build config 00:02:04.375 net/cpfl: not in enabled drivers build config 00:02:04.375 net/cxgbe: not in enabled drivers build config 00:02:04.375 net/dpaa: not in enabled drivers build config 00:02:04.375 net/dpaa2: not in enabled drivers build config 00:02:04.375 net/e1000: not in enabled drivers build config 00:02:04.375 net/ena: not in enabled drivers build config 00:02:04.375 net/enetc: not in enabled drivers build config 00:02:04.375 net/enetfec: not in enabled drivers build config 00:02:04.375 net/enic: not in enabled drivers build config 00:02:04.375 net/failsafe: not in enabled drivers build config 00:02:04.375 net/fm10k: not in enabled drivers build config 00:02:04.375 net/gve: not in enabled drivers build config 00:02:04.375 net/hinic: not in enabled drivers build config 00:02:04.375 net/hns3: not in enabled drivers build config 00:02:04.375 net/i40e: not in enabled drivers build config 00:02:04.375 net/iavf: not in enabled drivers build config 00:02:04.375 net/ice: not in enabled drivers build config 00:02:04.375 net/idpf: not in enabled drivers build config 00:02:04.375 net/igc: not in enabled drivers build config 00:02:04.375 net/ionic: not in enabled drivers build config 00:02:04.375 net/ipn3ke: not in enabled drivers build config 00:02:04.375 net/ixgbe: not in enabled drivers build config 00:02:04.375 net/mana: not in enabled drivers build config 00:02:04.375 net/memif: not in enabled drivers build config 00:02:04.375 net/mlx4: not in enabled drivers build config 00:02:04.375 net/mlx5: not in enabled drivers build config 00:02:04.375 net/mvneta: not in enabled drivers build config 00:02:04.375 net/mvpp2: not in enabled drivers build config 00:02:04.375 net/netvsc: not in enabled drivers build config 00:02:04.375 net/nfb: not in enabled drivers build config 00:02:04.375 net/nfp: not in enabled drivers build config 00:02:04.375 net/ngbe: not in enabled drivers build 
config 00:02:04.375 net/null: not in enabled drivers build config 00:02:04.375 net/octeontx: not in enabled drivers build config 00:02:04.375 net/octeon_ep: not in enabled drivers build config 00:02:04.375 net/pcap: not in enabled drivers build config 00:02:04.375 net/pfe: not in enabled drivers build config 00:02:04.375 net/qede: not in enabled drivers build config 00:02:04.375 net/ring: not in enabled drivers build config 00:02:04.375 net/sfc: not in enabled drivers build config 00:02:04.375 net/softnic: not in enabled drivers build config 00:02:04.375 net/tap: not in enabled drivers build config 00:02:04.375 net/thunderx: not in enabled drivers build config 00:02:04.375 net/txgbe: not in enabled drivers build config 00:02:04.375 net/vdev_netvsc: not in enabled drivers build config 00:02:04.375 net/vhost: not in enabled drivers build config 00:02:04.375 net/virtio: not in enabled drivers build config 00:02:04.375 net/vmxnet3: not in enabled drivers build config 00:02:04.375 raw/*: missing internal dependency, "rawdev" 00:02:04.375 crypto/armv8: not in enabled drivers build config 00:02:04.375 crypto/bcmfs: not in enabled drivers build config 00:02:04.375 crypto/caam_jr: not in enabled drivers build config 00:02:04.375 crypto/ccp: not in enabled drivers build config 00:02:04.375 crypto/cnxk: not in enabled drivers build config 00:02:04.375 crypto/dpaa_sec: not in enabled drivers build config 00:02:04.375 crypto/dpaa2_sec: not in enabled drivers build config 00:02:04.375 crypto/ipsec_mb: not in enabled drivers build config 00:02:04.375 crypto/mlx5: not in enabled drivers build config 00:02:04.375 crypto/mvsam: not in enabled drivers build config 00:02:04.375 crypto/nitrox: not in enabled drivers build config 00:02:04.375 crypto/null: not in enabled drivers build config 00:02:04.375 crypto/octeontx: not in enabled drivers build config 00:02:04.375 crypto/openssl: not in enabled drivers build config 00:02:04.375 crypto/scheduler: not in enabled drivers build config 00:02:04.375 crypto/uadk: not in enabled drivers build config 00:02:04.375 crypto/virtio: not in enabled drivers build config 00:02:04.375 compress/isal: not in enabled drivers build config 00:02:04.375 compress/mlx5: not in enabled drivers build config 00:02:04.375 compress/nitrox: not in enabled drivers build config 00:02:04.375 compress/octeontx: not in enabled drivers build config 00:02:04.375 compress/zlib: not in enabled drivers build config 00:02:04.375 regex/*: missing internal dependency, "regexdev" 00:02:04.375 ml/*: missing internal dependency, "mldev" 00:02:04.375 vdpa/ifc: not in enabled drivers build config 00:02:04.375 vdpa/mlx5: not in enabled drivers build config 00:02:04.375 vdpa/nfp: not in enabled drivers build config 00:02:04.375 vdpa/sfc: not in enabled drivers build config 00:02:04.375 event/*: missing internal dependency, "eventdev" 00:02:04.375 baseband/*: missing internal dependency, "bbdev" 00:02:04.375 gpu/*: missing internal dependency, "gpudev" 00:02:04.375 00:02:04.375 00:02:04.375 Build targets in project: 84 00:02:04.375 00:02:04.375 DPDK 24.03.0 00:02:04.375 00:02:04.375 User defined options 00:02:04.375 buildtype : debug 00:02:04.375 default_library : shared 00:02:04.375 libdir : lib 00:02:04.375 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:04.375 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:04.375 c_link_args : 00:02:04.375 cpu_instruction_set: native 00:02:04.375 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:02:04.375 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:02:04.375 enable_docs : false 00:02:04.375 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:04.375 enable_kmods : false 00:02:04.375 max_lcores : 128 00:02:04.375 tests : false 00:02:04.375 00:02:04.375 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:04.375 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:04.375 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:04.375 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:04.375 [3/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:04.375 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:04.375 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:04.375 [6/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:04.375 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:04.375 [8/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:04.375 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:04.375 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:04.375 [11/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:04.375 [12/267] Linking static target lib/librte_kvargs.a 00:02:04.375 [13/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:04.375 [14/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:04.375 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:04.375 [16/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:04.375 [17/267] Linking static target lib/librte_log.a 00:02:04.375 [18/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:04.375 [19/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:04.375 [20/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:04.375 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:04.375 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:04.375 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:04.375 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:04.375 [25/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:04.375 [26/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:04.375 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:04.375 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:04.375 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:04.375 [30/267] Compiling C object 
lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:04.375 [31/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:04.375 [32/267] Linking static target lib/librte_pci.a 00:02:04.375 [33/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:04.375 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:04.633 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:04.634 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:04.634 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:04.634 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:04.634 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:04.634 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:04.634 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:04.634 [42/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:04.634 [43/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.634 [44/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.634 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:04.634 [46/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:04.634 [47/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:04.634 [48/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:04.634 [49/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:04.634 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:04.634 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:04.634 [52/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:04.634 [53/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:04.895 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:04.895 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:04.895 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:04.895 [57/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:04.895 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:04.895 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:04.895 [60/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:04.895 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:04.895 [62/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:04.895 [63/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:04.895 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:04.896 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:04.896 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:04.896 [67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:04.896 [68/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:04.896 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:04.896 [70/267] 
Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:04.896 [71/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:04.896 [72/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:04.896 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:04.896 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:04.896 [75/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:04.896 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:04.896 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:04.896 [78/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:04.896 [79/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:04.896 [80/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:04.896 [81/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:04.896 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:04.896 [83/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:04.896 [84/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:04.896 [85/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:04.896 [86/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:04.896 [87/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:04.896 [88/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:04.896 [89/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:04.896 [90/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:04.896 [91/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:04.896 [92/267] Linking static target lib/librte_meter.a 00:02:04.896 [93/267] Linking static target lib/librte_ring.a 00:02:04.896 [94/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:04.896 [95/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:04.896 [96/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:04.896 [97/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:04.896 [98/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:04.896 [99/267] Linking static target lib/librte_telemetry.a 00:02:04.896 [100/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:04.896 [101/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:04.896 [102/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:04.896 [103/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:04.896 [104/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:04.896 [105/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:04.896 [106/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:04.896 [107/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:04.896 [108/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:04.896 [109/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:04.896 [110/267] Compiling C object 
lib/librte_net.a.p/net_rte_net.c.o 00:02:04.896 [111/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:04.896 [112/267] Linking static target lib/librte_cmdline.a 00:02:04.896 [113/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:04.896 [114/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:04.896 [115/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:04.896 [116/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:04.896 [117/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:04.896 [118/267] Linking static target lib/librte_timer.a 00:02:04.896 [119/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:04.896 [120/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:04.896 [121/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:04.896 [122/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:04.896 [123/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:04.896 [124/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:04.896 [125/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:04.896 [126/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:04.896 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:04.896 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:04.896 [129/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:04.896 [130/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:04.896 [131/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:04.896 [132/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:04.896 [133/267] Linking static target lib/librte_compressdev.a 00:02:04.896 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:04.896 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:04.896 [136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:04.896 [137/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:04.896 [138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:04.896 [139/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:04.896 [140/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:04.896 [141/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.896 [142/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:04.896 [143/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:04.896 [144/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:04.896 [145/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:04.896 [146/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:04.896 [147/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:04.896 [148/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:04.896 [149/267] Linking static target lib/librte_rcu.a 00:02:04.896 [150/267] Linking static target lib/librte_mempool.a 00:02:04.896 [151/267] Compiling C object 
lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:04.896 [152/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:04.896 [153/267] Linking target lib/librte_log.so.24.1 00:02:04.896 [154/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:04.896 [155/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:04.896 [156/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:04.896 [157/267] Linking static target lib/librte_power.a 00:02:04.896 [158/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:04.896 [159/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:04.896 [160/267] Linking static target lib/librte_reorder.a 00:02:04.896 [161/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:04.896 [162/267] Linking static target lib/librte_dmadev.a 00:02:04.896 [163/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:04.896 [164/267] Linking static target lib/librte_net.a 00:02:05.159 [165/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:05.159 [166/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:05.159 [167/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:05.159 [168/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:05.159 [169/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:05.159 [170/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:05.159 [171/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:05.159 [172/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:05.159 [173/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:05.159 [174/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:05.159 [175/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:05.159 [176/267] Linking static target lib/librte_security.a 00:02:05.159 [177/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:05.159 [178/267] Linking static target lib/librte_eal.a 00:02:05.159 [179/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:05.159 [180/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:05.159 [181/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:05.159 [182/267] Linking static target lib/librte_mbuf.a 00:02:05.159 [183/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:05.159 [184/267] Linking static target drivers/librte_bus_vdev.a 00:02:05.159 [185/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:05.159 [186/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:05.159 [187/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.159 [188/267] Linking target lib/librte_kvargs.so.24.1 00:02:05.159 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:05.159 [190/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:05.159 [191/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.159 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:05.159 [193/267] Compiling C object 
lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:05.159 [194/267] Linking static target lib/librte_hash.a 00:02:05.420 [195/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:05.420 [196/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:05.420 [197/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:05.420 [198/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:05.420 [199/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:05.420 [200/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:05.420 [201/267] Linking static target drivers/librte_bus_pci.a 00:02:05.420 [202/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:05.420 [203/267] Linking static target drivers/librte_mempool_ring.a 00:02:05.420 [204/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.420 [205/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:05.420 [206/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.420 [207/267] Linking static target lib/librte_cryptodev.a 00:02:05.420 [208/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.420 [209/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:05.420 [210/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.420 [211/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.420 [212/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.420 [213/267] Linking target lib/librte_telemetry.so.24.1 00:02:05.681 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:05.681 [215/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.681 [216/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.681 [217/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.942 [218/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:05.942 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:05.942 [220/267] Linking static target lib/librte_ethdev.a 00:02:05.942 [221/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.942 [222/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.942 [223/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.203 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.203 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.203 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.776 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:06.776 [228/267] Linking static target lib/librte_vhost.a 00:02:07.721 [229/267] Generating lib/cryptodev.sym_chk 
with a custom command (wrapped by meson to capture output) 00:02:09.109 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.701 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.646 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.907 [233/267] Linking target lib/librte_eal.so.24.1 00:02:16.907 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:16.907 [235/267] Linking target lib/librte_ring.so.24.1 00:02:16.907 [236/267] Linking target lib/librte_meter.so.24.1 00:02:16.907 [237/267] Linking target lib/librte_pci.so.24.1 00:02:16.907 [238/267] Linking target lib/librte_timer.so.24.1 00:02:16.907 [239/267] Linking target lib/librte_dmadev.so.24.1 00:02:16.907 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:17.169 [241/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:17.169 [242/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:17.169 [243/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:17.169 [244/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:17.169 [245/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:17.169 [246/267] Linking target lib/librte_mempool.so.24.1 00:02:17.169 [247/267] Linking target lib/librte_rcu.so.24.1 00:02:17.169 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:17.430 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:17.430 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:17.430 [251/267] Linking target lib/librte_mbuf.so.24.1 00:02:17.430 [252/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:17.430 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:17.430 [254/267] Linking target lib/librte_net.so.24.1 00:02:17.430 [255/267] Linking target lib/librte_compressdev.so.24.1 00:02:17.430 [256/267] Linking target lib/librte_cryptodev.so.24.1 00:02:17.430 [257/267] Linking target lib/librte_reorder.so.24.1 00:02:17.690 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:17.690 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:17.690 [260/267] Linking target lib/librte_hash.so.24.1 00:02:17.690 [261/267] Linking target lib/librte_security.so.24.1 00:02:17.690 [262/267] Linking target lib/librte_cmdline.so.24.1 00:02:17.690 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:17.951 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:17.951 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:17.951 [266/267] Linking target lib/librte_power.so.24.1 00:02:17.951 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:17.951 INFO: autodetecting backend as ninja 00:02:17.951 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:21.254 CC lib/log/log.o 00:02:21.254 CC lib/log/log_flags.o 00:02:21.254 CC lib/log/log_deprecated.o 00:02:21.254 CC lib/ut_mock/mock.o 00:02:21.254 CC lib/ut/ut.o 00:02:21.254 LIB libspdk_ut.a 00:02:21.254 LIB 
libspdk_ut_mock.a 00:02:21.254 LIB libspdk_log.a 00:02:21.254 SO libspdk_ut_mock.so.6.0 00:02:21.254 SO libspdk_ut.so.2.0 00:02:21.254 SO libspdk_log.so.7.1 00:02:21.254 SYMLINK libspdk_ut_mock.so 00:02:21.254 SYMLINK libspdk_ut.so 00:02:21.254 SYMLINK libspdk_log.so 00:02:21.515 CC lib/dma/dma.o 00:02:21.515 CC lib/util/base64.o 00:02:21.515 CC lib/ioat/ioat.o 00:02:21.515 CC lib/util/bit_array.o 00:02:21.515 CXX lib/trace_parser/trace.o 00:02:21.515 CC lib/util/cpuset.o 00:02:21.515 CC lib/util/crc16.o 00:02:21.515 CC lib/util/crc32.o 00:02:21.515 CC lib/util/crc32c.o 00:02:21.515 CC lib/util/crc32_ieee.o 00:02:21.515 CC lib/util/crc64.o 00:02:21.776 CC lib/util/dif.o 00:02:21.776 CC lib/util/fd.o 00:02:21.776 CC lib/util/fd_group.o 00:02:21.776 CC lib/util/file.o 00:02:21.776 CC lib/util/hexlify.o 00:02:21.776 CC lib/util/iov.o 00:02:21.776 CC lib/util/math.o 00:02:21.776 CC lib/util/net.o 00:02:21.776 CC lib/util/pipe.o 00:02:21.776 CC lib/util/strerror_tls.o 00:02:21.776 CC lib/util/string.o 00:02:21.776 CC lib/util/uuid.o 00:02:21.776 CC lib/util/xor.o 00:02:21.776 CC lib/util/zipf.o 00:02:21.776 CC lib/util/md5.o 00:02:21.776 CC lib/vfio_user/host/vfio_user.o 00:02:21.776 CC lib/vfio_user/host/vfio_user_pci.o 00:02:21.776 LIB libspdk_dma.a 00:02:21.776 SO libspdk_dma.so.5.0 00:02:22.036 LIB libspdk_ioat.a 00:02:22.036 SYMLINK libspdk_dma.so 00:02:22.036 SO libspdk_ioat.so.7.0 00:02:22.036 SYMLINK libspdk_ioat.so 00:02:22.036 LIB libspdk_vfio_user.a 00:02:22.036 SO libspdk_vfio_user.so.5.0 00:02:22.036 SYMLINK libspdk_vfio_user.so 00:02:22.297 LIB libspdk_util.a 00:02:22.297 LIB libspdk_trace_parser.a 00:02:22.297 SO libspdk_util.so.10.1 00:02:22.297 SO libspdk_trace_parser.so.6.0 00:02:22.297 SYMLINK libspdk_trace_parser.so 00:02:22.297 SYMLINK libspdk_util.so 00:02:22.868 CC lib/conf/conf.o 00:02:22.868 CC lib/env_dpdk/env.o 00:02:22.868 CC lib/json/json_parse.o 00:02:22.868 CC lib/rdma_utils/rdma_utils.o 00:02:22.868 CC lib/env_dpdk/memory.o 00:02:22.868 CC lib/json/json_util.o 00:02:22.868 CC lib/env_dpdk/pci.o 00:02:22.868 CC lib/vmd/vmd.o 00:02:22.868 CC lib/env_dpdk/init.o 00:02:22.868 CC lib/vmd/led.o 00:02:22.868 CC lib/json/json_write.o 00:02:22.868 CC lib/env_dpdk/threads.o 00:02:22.868 CC lib/env_dpdk/pci_ioat.o 00:02:22.868 CC lib/idxd/idxd.o 00:02:22.868 CC lib/env_dpdk/pci_virtio.o 00:02:22.868 CC lib/env_dpdk/pci_vmd.o 00:02:22.868 CC lib/idxd/idxd_user.o 00:02:22.868 CC lib/idxd/idxd_kernel.o 00:02:22.868 CC lib/env_dpdk/pci_idxd.o 00:02:22.868 CC lib/env_dpdk/pci_event.o 00:02:22.868 CC lib/env_dpdk/sigbus_handler.o 00:02:22.868 CC lib/env_dpdk/pci_dpdk.o 00:02:22.868 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:22.868 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:23.129 LIB libspdk_conf.a 00:02:23.129 SO libspdk_conf.so.6.0 00:02:23.129 LIB libspdk_rdma_utils.a 00:02:23.129 LIB libspdk_json.a 00:02:23.129 SYMLINK libspdk_conf.so 00:02:23.129 SO libspdk_rdma_utils.so.1.0 00:02:23.129 SO libspdk_json.so.6.0 00:02:23.129 SYMLINK libspdk_rdma_utils.so 00:02:23.129 SYMLINK libspdk_json.so 00:02:23.390 LIB libspdk_idxd.a 00:02:23.390 SO libspdk_idxd.so.12.1 00:02:23.390 LIB libspdk_vmd.a 00:02:23.390 SO libspdk_vmd.so.6.0 00:02:23.390 SYMLINK libspdk_idxd.so 00:02:23.651 SYMLINK libspdk_vmd.so 00:02:23.651 CC lib/rdma_provider/common.o 00:02:23.651 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:23.651 CC lib/jsonrpc/jsonrpc_server.o 00:02:23.651 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:23.651 CC lib/jsonrpc/jsonrpc_client.o 00:02:23.651 CC lib/jsonrpc/jsonrpc_client_tcp.o 
00:02:23.651 LIB libspdk_rdma_provider.a 00:02:23.930 LIB libspdk_jsonrpc.a 00:02:23.930 SO libspdk_rdma_provider.so.7.0 00:02:23.930 SO libspdk_jsonrpc.so.6.0 00:02:23.930 SYMLINK libspdk_rdma_provider.so 00:02:23.930 SYMLINK libspdk_jsonrpc.so 00:02:23.930 LIB libspdk_env_dpdk.a 00:02:24.267 SO libspdk_env_dpdk.so.15.1 00:02:24.267 SYMLINK libspdk_env_dpdk.so 00:02:24.267 CC lib/rpc/rpc.o 00:02:24.592 LIB libspdk_rpc.a 00:02:24.592 SO libspdk_rpc.so.6.0 00:02:24.592 SYMLINK libspdk_rpc.so 00:02:25.166 CC lib/trace/trace.o 00:02:25.166 CC lib/keyring/keyring.o 00:02:25.166 CC lib/trace/trace_flags.o 00:02:25.166 CC lib/keyring/keyring_rpc.o 00:02:25.166 CC lib/trace/trace_rpc.o 00:02:25.166 CC lib/notify/notify.o 00:02:25.166 CC lib/notify/notify_rpc.o 00:02:25.166 LIB libspdk_notify.a 00:02:25.166 LIB libspdk_keyring.a 00:02:25.166 SO libspdk_notify.so.6.0 00:02:25.166 SO libspdk_keyring.so.2.0 00:02:25.166 LIB libspdk_trace.a 00:02:25.166 SYMLINK libspdk_notify.so 00:02:25.426 SO libspdk_trace.so.11.0 00:02:25.426 SYMLINK libspdk_keyring.so 00:02:25.426 SYMLINK libspdk_trace.so 00:02:25.687 CC lib/sock/sock.o 00:02:25.687 CC lib/sock/sock_rpc.o 00:02:25.687 CC lib/thread/thread.o 00:02:25.687 CC lib/thread/iobuf.o 00:02:26.259 LIB libspdk_sock.a 00:02:26.259 SO libspdk_sock.so.10.0 00:02:26.259 SYMLINK libspdk_sock.so 00:02:26.520 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:26.520 CC lib/nvme/nvme_ctrlr.o 00:02:26.520 CC lib/nvme/nvme_fabric.o 00:02:26.520 CC lib/nvme/nvme_ns_cmd.o 00:02:26.520 CC lib/nvme/nvme_ns.o 00:02:26.520 CC lib/nvme/nvme_pcie_common.o 00:02:26.520 CC lib/nvme/nvme_pcie.o 00:02:26.520 CC lib/nvme/nvme_qpair.o 00:02:26.520 CC lib/nvme/nvme.o 00:02:26.520 CC lib/nvme/nvme_quirks.o 00:02:26.520 CC lib/nvme/nvme_transport.o 00:02:26.520 CC lib/nvme/nvme_discovery.o 00:02:26.520 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:26.520 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:26.520 CC lib/nvme/nvme_tcp.o 00:02:26.520 CC lib/nvme/nvme_opal.o 00:02:26.520 CC lib/nvme/nvme_io_msg.o 00:02:26.520 CC lib/nvme/nvme_poll_group.o 00:02:26.520 CC lib/nvme/nvme_zns.o 00:02:26.520 CC lib/nvme/nvme_stubs.o 00:02:26.520 CC lib/nvme/nvme_auth.o 00:02:26.520 CC lib/nvme/nvme_cuse.o 00:02:26.520 CC lib/nvme/nvme_vfio_user.o 00:02:26.520 CC lib/nvme/nvme_rdma.o 00:02:27.091 LIB libspdk_thread.a 00:02:27.091 SO libspdk_thread.so.11.0 00:02:27.091 SYMLINK libspdk_thread.so 00:02:27.663 CC lib/fsdev/fsdev.o 00:02:27.663 CC lib/fsdev/fsdev_io.o 00:02:27.663 CC lib/fsdev/fsdev_rpc.o 00:02:27.663 CC lib/init/json_config.o 00:02:27.663 CC lib/init/subsystem.o 00:02:27.663 CC lib/init/subsystem_rpc.o 00:02:27.663 CC lib/init/rpc.o 00:02:27.663 CC lib/vfu_tgt/tgt_endpoint.o 00:02:27.663 CC lib/vfu_tgt/tgt_rpc.o 00:02:27.663 CC lib/virtio/virtio.o 00:02:27.663 CC lib/blob/blobstore.o 00:02:27.663 CC lib/virtio/virtio_vhost_user.o 00:02:27.663 CC lib/accel/accel.o 00:02:27.663 CC lib/blob/request.o 00:02:27.663 CC lib/virtio/virtio_vfio_user.o 00:02:27.663 CC lib/accel/accel_rpc.o 00:02:27.663 CC lib/blob/zeroes.o 00:02:27.663 CC lib/virtio/virtio_pci.o 00:02:27.663 CC lib/accel/accel_sw.o 00:02:27.663 CC lib/blob/blob_bs_dev.o 00:02:27.924 LIB libspdk_init.a 00:02:27.924 SO libspdk_init.so.6.0 00:02:27.924 LIB libspdk_vfu_tgt.a 00:02:27.924 LIB libspdk_virtio.a 00:02:27.924 SO libspdk_vfu_tgt.so.3.0 00:02:27.924 SYMLINK libspdk_init.so 00:02:27.924 SO libspdk_virtio.so.7.0 00:02:27.924 SYMLINK libspdk_vfu_tgt.so 00:02:27.924 SYMLINK libspdk_virtio.so 00:02:28.186 LIB libspdk_fsdev.a 00:02:28.186 SO 
libspdk_fsdev.so.2.0 00:02:28.186 CC lib/event/app.o 00:02:28.186 CC lib/event/reactor.o 00:02:28.186 CC lib/event/log_rpc.o 00:02:28.186 CC lib/event/app_rpc.o 00:02:28.186 CC lib/event/scheduler_static.o 00:02:28.186 SYMLINK libspdk_fsdev.so 00:02:28.447 LIB libspdk_accel.a 00:02:28.447 LIB libspdk_nvme.a 00:02:28.709 SO libspdk_accel.so.16.0 00:02:28.709 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:28.709 SYMLINK libspdk_accel.so 00:02:28.709 LIB libspdk_event.a 00:02:28.709 SO libspdk_nvme.so.15.0 00:02:28.709 SO libspdk_event.so.14.0 00:02:28.709 SYMLINK libspdk_event.so 00:02:28.970 SYMLINK libspdk_nvme.so 00:02:28.970 CC lib/bdev/bdev.o 00:02:28.970 CC lib/bdev/bdev_rpc.o 00:02:28.970 CC lib/bdev/bdev_zone.o 00:02:28.970 CC lib/bdev/part.o 00:02:28.970 CC lib/bdev/scsi_nvme.o 00:02:29.232 LIB libspdk_fuse_dispatcher.a 00:02:29.232 SO libspdk_fuse_dispatcher.so.1.0 00:02:29.232 SYMLINK libspdk_fuse_dispatcher.so 00:02:30.175 LIB libspdk_blob.a 00:02:30.175 SO libspdk_blob.so.12.0 00:02:30.444 SYMLINK libspdk_blob.so 00:02:30.705 CC lib/blobfs/blobfs.o 00:02:30.705 CC lib/blobfs/tree.o 00:02:30.705 CC lib/lvol/lvol.o 00:02:31.644 LIB libspdk_bdev.a 00:02:31.644 LIB libspdk_blobfs.a 00:02:31.644 SO libspdk_bdev.so.17.0 00:02:31.644 SO libspdk_blobfs.so.11.0 00:02:31.644 LIB libspdk_lvol.a 00:02:31.644 SYMLINK libspdk_bdev.so 00:02:31.644 SYMLINK libspdk_blobfs.so 00:02:31.644 SO libspdk_lvol.so.11.0 00:02:31.644 SYMLINK libspdk_lvol.so 00:02:31.904 CC lib/nvmf/ctrlr.o 00:02:31.904 CC lib/nvmf/ctrlr_discovery.o 00:02:31.904 CC lib/nvmf/ctrlr_bdev.o 00:02:31.904 CC lib/nbd/nbd.o 00:02:31.904 CC lib/nvmf/subsystem.o 00:02:31.904 CC lib/nvmf/nvmf.o 00:02:31.904 CC lib/nbd/nbd_rpc.o 00:02:31.904 CC lib/nvmf/nvmf_rpc.o 00:02:31.904 CC lib/nvmf/transport.o 00:02:31.904 CC lib/nvmf/tcp.o 00:02:31.904 CC lib/ublk/ublk.o 00:02:31.904 CC lib/nvmf/stubs.o 00:02:31.904 CC lib/ublk/ublk_rpc.o 00:02:31.904 CC lib/scsi/dev.o 00:02:31.904 CC lib/nvmf/mdns_server.o 00:02:31.904 CC lib/scsi/lun.o 00:02:31.904 CC lib/nvmf/vfio_user.o 00:02:31.904 CC lib/ftl/ftl_core.o 00:02:31.904 CC lib/scsi/port.o 00:02:31.904 CC lib/nvmf/rdma.o 00:02:31.904 CC lib/ftl/ftl_init.o 00:02:31.904 CC lib/scsi/scsi.o 00:02:31.904 CC lib/nvmf/auth.o 00:02:31.904 CC lib/ftl/ftl_layout.o 00:02:31.904 CC lib/scsi/scsi_bdev.o 00:02:31.904 CC lib/ftl/ftl_debug.o 00:02:31.904 CC lib/scsi/scsi_pr.o 00:02:31.904 CC lib/scsi/scsi_rpc.o 00:02:31.904 CC lib/ftl/ftl_io.o 00:02:31.904 CC lib/scsi/task.o 00:02:31.904 CC lib/ftl/ftl_sb.o 00:02:31.904 CC lib/ftl/ftl_l2p.o 00:02:31.904 CC lib/ftl/ftl_l2p_flat.o 00:02:31.904 CC lib/ftl/ftl_nv_cache.o 00:02:31.904 CC lib/ftl/ftl_band.o 00:02:31.904 CC lib/ftl/ftl_band_ops.o 00:02:31.904 CC lib/ftl/ftl_writer.o 00:02:31.904 CC lib/ftl/ftl_rq.o 00:02:31.904 CC lib/ftl/ftl_reloc.o 00:02:31.904 CC lib/ftl/ftl_l2p_cache.o 00:02:31.904 CC lib/ftl/ftl_p2l.o 00:02:31.904 CC lib/ftl/ftl_p2l_log.o 00:02:31.904 CC lib/ftl/mngt/ftl_mngt.o 00:02:31.904 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:31.904 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:31.904 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:31.904 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:31.904 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:31.904 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:31.904 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:31.904 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:31.904 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:31.904 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:31.904 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:31.904 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:31.904 CC 
lib/ftl/utils/ftl_conf.o 00:02:31.904 CC lib/ftl/utils/ftl_md.o 00:02:31.904 CC lib/ftl/utils/ftl_mempool.o 00:02:31.904 CC lib/ftl/utils/ftl_property.o 00:02:31.904 CC lib/ftl/utils/ftl_bitmap.o 00:02:31.904 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:31.904 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:31.904 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:31.904 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:31.904 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:31.904 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:31.904 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:31.904 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:31.904 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:31.904 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:31.904 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:31.904 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:31.904 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:31.904 CC lib/ftl/base/ftl_base_dev.o 00:02:31.904 CC lib/ftl/base/ftl_base_bdev.o 00:02:31.904 CC lib/ftl/ftl_trace.o 00:02:32.875 LIB libspdk_nbd.a 00:02:32.875 SO libspdk_nbd.so.7.0 00:02:32.875 LIB libspdk_scsi.a 00:02:32.875 SYMLINK libspdk_nbd.so 00:02:32.875 SO libspdk_scsi.so.9.0 00:02:32.875 LIB libspdk_ublk.a 00:02:32.875 SYMLINK libspdk_scsi.so 00:02:32.875 SO libspdk_ublk.so.3.0 00:02:32.875 SYMLINK libspdk_ublk.so 00:02:33.138 LIB libspdk_ftl.a 00:02:33.138 CC lib/vhost/vhost.o 00:02:33.138 CC lib/vhost/vhost_rpc.o 00:02:33.138 CC lib/vhost/vhost_scsi.o 00:02:33.138 CC lib/vhost/vhost_blk.o 00:02:33.138 CC lib/iscsi/conn.o 00:02:33.138 CC lib/vhost/rte_vhost_user.o 00:02:33.138 CC lib/iscsi/init_grp.o 00:02:33.138 CC lib/iscsi/iscsi.o 00:02:33.138 CC lib/iscsi/param.o 00:02:33.138 CC lib/iscsi/portal_grp.o 00:02:33.138 CC lib/iscsi/tgt_node.o 00:02:33.138 CC lib/iscsi/iscsi_subsystem.o 00:02:33.138 CC lib/iscsi/iscsi_rpc.o 00:02:33.138 CC lib/iscsi/task.o 00:02:33.398 SO libspdk_ftl.so.9.0 00:02:33.658 SYMLINK libspdk_ftl.so 00:02:33.919 LIB libspdk_nvmf.a 00:02:34.180 SO libspdk_nvmf.so.20.0 00:02:34.180 LIB libspdk_vhost.a 00:02:34.180 SO libspdk_vhost.so.8.0 00:02:34.441 SYMLINK libspdk_nvmf.so 00:02:34.441 SYMLINK libspdk_vhost.so 00:02:34.441 LIB libspdk_iscsi.a 00:02:34.441 SO libspdk_iscsi.so.8.0 00:02:34.701 SYMLINK libspdk_iscsi.so 00:02:35.273 CC module/env_dpdk/env_dpdk_rpc.o 00:02:35.273 CC module/vfu_device/vfu_virtio.o 00:02:35.273 CC module/vfu_device/vfu_virtio_blk.o 00:02:35.273 CC module/vfu_device/vfu_virtio_scsi.o 00:02:35.273 CC module/vfu_device/vfu_virtio_rpc.o 00:02:35.273 CC module/vfu_device/vfu_virtio_fs.o 00:02:35.534 LIB libspdk_env_dpdk_rpc.a 00:02:35.534 CC module/accel/dsa/accel_dsa.o 00:02:35.534 CC module/accel/dsa/accel_dsa_rpc.o 00:02:35.534 CC module/accel/error/accel_error.o 00:02:35.534 CC module/blob/bdev/blob_bdev.o 00:02:35.534 CC module/accel/error/accel_error_rpc.o 00:02:35.534 CC module/accel/ioat/accel_ioat.o 00:02:35.534 CC module/accel/ioat/accel_ioat_rpc.o 00:02:35.534 CC module/sock/posix/posix.o 00:02:35.534 CC module/fsdev/aio/fsdev_aio.o 00:02:35.534 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:35.534 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:35.534 CC module/fsdev/aio/linux_aio_mgr.o 00:02:35.534 CC module/accel/iaa/accel_iaa.o 00:02:35.534 CC module/accel/iaa/accel_iaa_rpc.o 00:02:35.534 CC module/scheduler/gscheduler/gscheduler.o 00:02:35.534 CC module/keyring/file/keyring.o 00:02:35.534 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:35.534 CC module/keyring/file/keyring_rpc.o 00:02:35.534 CC module/keyring/linux/keyring.o 00:02:35.534 CC module/keyring/linux/keyring_rpc.o 
00:02:35.534 SO libspdk_env_dpdk_rpc.so.6.0 00:02:35.534 SYMLINK libspdk_env_dpdk_rpc.so 00:02:35.534 LIB libspdk_keyring_file.a 00:02:35.534 LIB libspdk_scheduler_dpdk_governor.a 00:02:35.534 LIB libspdk_keyring_linux.a 00:02:35.534 LIB libspdk_accel_ioat.a 00:02:35.795 LIB libspdk_scheduler_gscheduler.a 00:02:35.795 SO libspdk_keyring_file.so.2.0 00:02:35.795 LIB libspdk_accel_error.a 00:02:35.795 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:35.795 LIB libspdk_scheduler_dynamic.a 00:02:35.795 SO libspdk_keyring_linux.so.1.0 00:02:35.795 SO libspdk_accel_ioat.so.6.0 00:02:35.795 SO libspdk_scheduler_gscheduler.so.4.0 00:02:35.795 LIB libspdk_accel_iaa.a 00:02:35.795 SO libspdk_accel_error.so.2.0 00:02:35.795 SO libspdk_scheduler_dynamic.so.4.0 00:02:35.795 SYMLINK libspdk_keyring_file.so 00:02:35.795 LIB libspdk_accel_dsa.a 00:02:35.795 SO libspdk_accel_iaa.so.3.0 00:02:35.795 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:35.795 LIB libspdk_blob_bdev.a 00:02:35.795 SYMLINK libspdk_keyring_linux.so 00:02:35.795 SYMLINK libspdk_accel_ioat.so 00:02:35.795 SYMLINK libspdk_scheduler_gscheduler.so 00:02:35.795 SO libspdk_accel_dsa.so.5.0 00:02:35.795 SO libspdk_blob_bdev.so.12.0 00:02:35.795 SYMLINK libspdk_accel_error.so 00:02:35.795 SYMLINK libspdk_scheduler_dynamic.so 00:02:35.795 SYMLINK libspdk_accel_iaa.so 00:02:35.795 SYMLINK libspdk_blob_bdev.so 00:02:35.795 SYMLINK libspdk_accel_dsa.so 00:02:35.795 LIB libspdk_vfu_device.a 00:02:36.056 SO libspdk_vfu_device.so.3.0 00:02:36.056 SYMLINK libspdk_vfu_device.so 00:02:36.056 LIB libspdk_fsdev_aio.a 00:02:36.056 SO libspdk_fsdev_aio.so.1.0 00:02:36.056 LIB libspdk_sock_posix.a 00:02:36.317 SO libspdk_sock_posix.so.6.0 00:02:36.317 SYMLINK libspdk_fsdev_aio.so 00:02:36.317 SYMLINK libspdk_sock_posix.so 00:02:36.317 CC module/bdev/delay/vbdev_delay.o 00:02:36.317 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:36.317 CC module/blobfs/bdev/blobfs_bdev.o 00:02:36.317 CC module/bdev/error/vbdev_error.o 00:02:36.317 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:36.317 CC module/bdev/error/vbdev_error_rpc.o 00:02:36.317 CC module/bdev/malloc/bdev_malloc.o 00:02:36.317 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:36.317 CC module/bdev/gpt/gpt.o 00:02:36.317 CC module/bdev/nvme/bdev_nvme.o 00:02:36.317 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:36.317 CC module/bdev/gpt/vbdev_gpt.o 00:02:36.317 CC module/bdev/nvme/nvme_rpc.o 00:02:36.317 CC module/bdev/aio/bdev_aio.o 00:02:36.317 CC module/bdev/lvol/vbdev_lvol.o 00:02:36.317 CC module/bdev/passthru/vbdev_passthru.o 00:02:36.317 CC module/bdev/nvme/bdev_mdns_client.o 00:02:36.317 CC module/bdev/aio/bdev_aio_rpc.o 00:02:36.317 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:36.317 CC module/bdev/nvme/vbdev_opal.o 00:02:36.317 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:36.317 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:36.317 CC module/bdev/null/bdev_null.o 00:02:36.317 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:36.317 CC module/bdev/null/bdev_null_rpc.o 00:02:36.579 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:36.579 CC module/bdev/raid/bdev_raid.o 00:02:36.579 CC module/bdev/raid/bdev_raid_rpc.o 00:02:36.579 CC module/bdev/split/vbdev_split.o 00:02:36.579 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:36.579 CC module/bdev/split/vbdev_split_rpc.o 00:02:36.579 CC module/bdev/raid/raid0.o 00:02:36.579 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:36.579 CC module/bdev/raid/bdev_raid_sb.o 00:02:36.579 CC module/bdev/raid/raid1.o 00:02:36.579 CC module/bdev/iscsi/bdev_iscsi.o 00:02:36.579 CC 
module/bdev/ftl/bdev_ftl.o 00:02:36.579 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:36.579 CC module/bdev/raid/concat.o 00:02:36.579 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:36.579 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:36.579 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:36.839 LIB libspdk_blobfs_bdev.a 00:02:36.839 SO libspdk_blobfs_bdev.so.6.0 00:02:36.839 LIB libspdk_bdev_error.a 00:02:36.839 LIB libspdk_bdev_split.a 00:02:36.839 LIB libspdk_bdev_null.a 00:02:36.839 LIB libspdk_bdev_gpt.a 00:02:36.839 SO libspdk_bdev_error.so.6.0 00:02:36.839 SO libspdk_bdev_null.so.6.0 00:02:36.839 SO libspdk_bdev_split.so.6.0 00:02:36.839 LIB libspdk_bdev_ftl.a 00:02:36.839 SYMLINK libspdk_blobfs_bdev.so 00:02:36.839 SO libspdk_bdev_gpt.so.6.0 00:02:36.839 LIB libspdk_bdev_aio.a 00:02:36.839 LIB libspdk_bdev_passthru.a 00:02:36.839 LIB libspdk_bdev_delay.a 00:02:36.839 LIB libspdk_bdev_malloc.a 00:02:36.839 SO libspdk_bdev_ftl.so.6.0 00:02:36.839 SYMLINK libspdk_bdev_error.so 00:02:36.839 SYMLINK libspdk_bdev_split.so 00:02:36.839 SO libspdk_bdev_aio.so.6.0 00:02:36.839 SYMLINK libspdk_bdev_null.so 00:02:36.839 LIB libspdk_bdev_zone_block.a 00:02:36.839 SO libspdk_bdev_passthru.so.6.0 00:02:36.839 SO libspdk_bdev_delay.so.6.0 00:02:36.839 LIB libspdk_bdev_iscsi.a 00:02:36.839 SYMLINK libspdk_bdev_gpt.so 00:02:36.839 SO libspdk_bdev_malloc.so.6.0 00:02:36.839 SO libspdk_bdev_zone_block.so.6.0 00:02:36.839 SO libspdk_bdev_iscsi.so.6.0 00:02:37.101 SYMLINK libspdk_bdev_ftl.so 00:02:37.101 SYMLINK libspdk_bdev_aio.so 00:02:37.101 SYMLINK libspdk_bdev_delay.so 00:02:37.101 SYMLINK libspdk_bdev_passthru.so 00:02:37.101 SYMLINK libspdk_bdev_malloc.so 00:02:37.101 SYMLINK libspdk_bdev_zone_block.so 00:02:37.101 SYMLINK libspdk_bdev_iscsi.so 00:02:37.101 LIB libspdk_bdev_lvol.a 00:02:37.101 LIB libspdk_bdev_virtio.a 00:02:37.101 SO libspdk_bdev_lvol.so.6.0 00:02:37.101 SO libspdk_bdev_virtio.so.6.0 00:02:37.101 SYMLINK libspdk_bdev_lvol.so 00:02:37.101 SYMLINK libspdk_bdev_virtio.so 00:02:37.362 LIB libspdk_bdev_raid.a 00:02:37.624 SO libspdk_bdev_raid.so.6.0 00:02:37.624 SYMLINK libspdk_bdev_raid.so 00:02:39.010 LIB libspdk_bdev_nvme.a 00:02:39.010 SO libspdk_bdev_nvme.so.7.1 00:02:39.010 SYMLINK libspdk_bdev_nvme.so 00:02:39.952 CC module/event/subsystems/iobuf/iobuf.o 00:02:39.952 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:39.952 CC module/event/subsystems/keyring/keyring.o 00:02:39.952 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:39.952 CC module/event/subsystems/sock/sock.o 00:02:39.952 CC module/event/subsystems/vmd/vmd.o 00:02:39.952 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:39.952 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:39.952 CC module/event/subsystems/scheduler/scheduler.o 00:02:39.952 CC module/event/subsystems/fsdev/fsdev.o 00:02:39.952 LIB libspdk_event_vfu_tgt.a 00:02:39.952 LIB libspdk_event_keyring.a 00:02:39.952 LIB libspdk_event_vhost_blk.a 00:02:39.952 LIB libspdk_event_iobuf.a 00:02:39.952 LIB libspdk_event_sock.a 00:02:39.952 LIB libspdk_event_vmd.a 00:02:39.952 LIB libspdk_event_scheduler.a 00:02:39.952 LIB libspdk_event_fsdev.a 00:02:39.952 SO libspdk_event_vfu_tgt.so.3.0 00:02:39.952 SO libspdk_event_keyring.so.1.0 00:02:39.952 SO libspdk_event_vhost_blk.so.3.0 00:02:39.952 SO libspdk_event_iobuf.so.3.0 00:02:39.952 SO libspdk_event_scheduler.so.4.0 00:02:39.952 SO libspdk_event_sock.so.5.0 00:02:39.952 SO libspdk_event_fsdev.so.1.0 00:02:39.952 SO libspdk_event_vmd.so.6.0 00:02:39.952 SYMLINK libspdk_event_vfu_tgt.so 00:02:39.952 
SYMLINK libspdk_event_keyring.so 00:02:39.952 SYMLINK libspdk_event_vhost_blk.so 00:02:39.952 SYMLINK libspdk_event_sock.so 00:02:39.952 SYMLINK libspdk_event_scheduler.so 00:02:39.952 SYMLINK libspdk_event_iobuf.so 00:02:39.952 SYMLINK libspdk_event_fsdev.so 00:02:39.952 SYMLINK libspdk_event_vmd.so 00:02:40.523 CC module/event/subsystems/accel/accel.o 00:02:40.523 LIB libspdk_event_accel.a 00:02:40.523 SO libspdk_event_accel.so.6.0 00:02:40.784 SYMLINK libspdk_event_accel.so 00:02:41.045 CC module/event/subsystems/bdev/bdev.o 00:02:41.305 LIB libspdk_event_bdev.a 00:02:41.305 SO libspdk_event_bdev.so.6.0 00:02:41.305 SYMLINK libspdk_event_bdev.so 00:02:41.566 CC module/event/subsystems/nbd/nbd.o 00:02:41.566 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:41.566 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:41.566 CC module/event/subsystems/ublk/ublk.o 00:02:41.566 CC module/event/subsystems/scsi/scsi.o 00:02:41.826 LIB libspdk_event_nbd.a 00:02:41.826 LIB libspdk_event_ublk.a 00:02:41.826 LIB libspdk_event_scsi.a 00:02:41.826 SO libspdk_event_nbd.so.6.0 00:02:41.826 SO libspdk_event_ublk.so.3.0 00:02:41.826 SO libspdk_event_scsi.so.6.0 00:02:41.826 LIB libspdk_event_nvmf.a 00:02:41.826 SYMLINK libspdk_event_nbd.so 00:02:41.826 SO libspdk_event_nvmf.so.6.0 00:02:41.826 SYMLINK libspdk_event_ublk.so 00:02:42.087 SYMLINK libspdk_event_scsi.so 00:02:42.087 SYMLINK libspdk_event_nvmf.so 00:02:42.348 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:42.348 CC module/event/subsystems/iscsi/iscsi.o 00:02:42.609 LIB libspdk_event_vhost_scsi.a 00:02:42.609 SO libspdk_event_vhost_scsi.so.3.0 00:02:42.609 LIB libspdk_event_iscsi.a 00:02:42.609 SO libspdk_event_iscsi.so.6.0 00:02:42.609 SYMLINK libspdk_event_vhost_scsi.so 00:02:42.609 SYMLINK libspdk_event_iscsi.so 00:02:42.868 SO libspdk.so.6.0 00:02:42.868 SYMLINK libspdk.so 00:02:43.129 CXX app/trace/trace.o 00:02:43.129 CC app/trace_record/trace_record.o 00:02:43.129 CC app/spdk_top/spdk_top.o 00:02:43.129 TEST_HEADER include/spdk/accel.h 00:02:43.129 TEST_HEADER include/spdk/accel_module.h 00:02:43.129 TEST_HEADER include/spdk/assert.h 00:02:43.129 TEST_HEADER include/spdk/barrier.h 00:02:43.129 CC test/rpc_client/rpc_client_test.o 00:02:43.129 TEST_HEADER include/spdk/base64.h 00:02:43.129 CC app/spdk_nvme_perf/perf.o 00:02:43.129 CC app/spdk_nvme_identify/identify.o 00:02:43.129 CC app/spdk_nvme_discover/discovery_aer.o 00:02:43.129 TEST_HEADER include/spdk/bdev.h 00:02:43.392 CC app/spdk_lspci/spdk_lspci.o 00:02:43.392 TEST_HEADER include/spdk/bdev_module.h 00:02:43.392 TEST_HEADER include/spdk/bdev_zone.h 00:02:43.392 TEST_HEADER include/spdk/bit_array.h 00:02:43.393 TEST_HEADER include/spdk/bit_pool.h 00:02:43.393 TEST_HEADER include/spdk/blob_bdev.h 00:02:43.393 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:43.393 TEST_HEADER include/spdk/blobfs.h 00:02:43.393 TEST_HEADER include/spdk/blob.h 00:02:43.393 TEST_HEADER include/spdk/conf.h 00:02:43.393 TEST_HEADER include/spdk/config.h 00:02:43.393 TEST_HEADER include/spdk/cpuset.h 00:02:43.393 TEST_HEADER include/spdk/crc16.h 00:02:43.393 TEST_HEADER include/spdk/crc64.h 00:02:43.393 TEST_HEADER include/spdk/crc32.h 00:02:43.393 TEST_HEADER include/spdk/dif.h 00:02:43.393 TEST_HEADER include/spdk/env_dpdk.h 00:02:43.393 TEST_HEADER include/spdk/endian.h 00:02:43.393 TEST_HEADER include/spdk/dma.h 00:02:43.393 TEST_HEADER include/spdk/env.h 00:02:43.393 TEST_HEADER include/spdk/event.h 00:02:43.393 TEST_HEADER include/spdk/fd_group.h 00:02:43.393 TEST_HEADER include/spdk/fd.h 
00:02:43.393 TEST_HEADER include/spdk/file.h 00:02:43.393 TEST_HEADER include/spdk/fsdev.h 00:02:43.393 TEST_HEADER include/spdk/fsdev_module.h 00:02:43.393 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:43.393 TEST_HEADER include/spdk/ftl.h 00:02:43.393 TEST_HEADER include/spdk/gpt_spec.h 00:02:43.393 TEST_HEADER include/spdk/hexlify.h 00:02:43.393 TEST_HEADER include/spdk/idxd.h 00:02:43.393 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:43.393 TEST_HEADER include/spdk/histogram_data.h 00:02:43.393 TEST_HEADER include/spdk/idxd_spec.h 00:02:43.393 CC app/iscsi_tgt/iscsi_tgt.o 00:02:43.393 TEST_HEADER include/spdk/init.h 00:02:43.393 TEST_HEADER include/spdk/ioat.h 00:02:43.393 CC app/spdk_dd/spdk_dd.o 00:02:43.393 TEST_HEADER include/spdk/ioat_spec.h 00:02:43.393 CC app/nvmf_tgt/nvmf_main.o 00:02:43.393 TEST_HEADER include/spdk/iscsi_spec.h 00:02:43.393 TEST_HEADER include/spdk/json.h 00:02:43.393 TEST_HEADER include/spdk/jsonrpc.h 00:02:43.393 TEST_HEADER include/spdk/keyring.h 00:02:43.393 TEST_HEADER include/spdk/keyring_module.h 00:02:43.393 CC app/spdk_tgt/spdk_tgt.o 00:02:43.393 TEST_HEADER include/spdk/likely.h 00:02:43.393 TEST_HEADER include/spdk/log.h 00:02:43.393 TEST_HEADER include/spdk/lvol.h 00:02:43.393 TEST_HEADER include/spdk/md5.h 00:02:43.393 TEST_HEADER include/spdk/memory.h 00:02:43.393 TEST_HEADER include/spdk/mmio.h 00:02:43.393 TEST_HEADER include/spdk/nbd.h 00:02:43.393 TEST_HEADER include/spdk/net.h 00:02:43.393 TEST_HEADER include/spdk/notify.h 00:02:43.393 TEST_HEADER include/spdk/nvme.h 00:02:43.393 TEST_HEADER include/spdk/nvme_intel.h 00:02:43.393 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:43.393 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:43.393 TEST_HEADER include/spdk/nvme_spec.h 00:02:43.393 TEST_HEADER include/spdk/nvme_zns.h 00:02:43.393 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:43.393 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:43.393 TEST_HEADER include/spdk/nvmf.h 00:02:43.393 TEST_HEADER include/spdk/nvmf_spec.h 00:02:43.393 TEST_HEADER include/spdk/opal_spec.h 00:02:43.393 TEST_HEADER include/spdk/nvmf_transport.h 00:02:43.393 TEST_HEADER include/spdk/opal.h 00:02:43.393 TEST_HEADER include/spdk/pci_ids.h 00:02:43.393 TEST_HEADER include/spdk/pipe.h 00:02:43.393 TEST_HEADER include/spdk/queue.h 00:02:43.393 TEST_HEADER include/spdk/reduce.h 00:02:43.393 TEST_HEADER include/spdk/rpc.h 00:02:43.393 TEST_HEADER include/spdk/scheduler.h 00:02:43.393 TEST_HEADER include/spdk/scsi.h 00:02:43.393 TEST_HEADER include/spdk/scsi_spec.h 00:02:43.393 TEST_HEADER include/spdk/sock.h 00:02:43.393 TEST_HEADER include/spdk/stdinc.h 00:02:43.393 TEST_HEADER include/spdk/string.h 00:02:43.393 TEST_HEADER include/spdk/thread.h 00:02:43.393 TEST_HEADER include/spdk/trace.h 00:02:43.393 TEST_HEADER include/spdk/trace_parser.h 00:02:43.393 TEST_HEADER include/spdk/tree.h 00:02:43.393 TEST_HEADER include/spdk/ublk.h 00:02:43.393 TEST_HEADER include/spdk/util.h 00:02:43.393 TEST_HEADER include/spdk/uuid.h 00:02:43.393 TEST_HEADER include/spdk/version.h 00:02:43.393 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:43.393 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:43.393 TEST_HEADER include/spdk/vhost.h 00:02:43.393 TEST_HEADER include/spdk/vmd.h 00:02:43.393 TEST_HEADER include/spdk/xor.h 00:02:43.393 TEST_HEADER include/spdk/zipf.h 00:02:43.393 CXX test/cpp_headers/accel.o 00:02:43.393 CXX test/cpp_headers/accel_module.o 00:02:43.393 CXX test/cpp_headers/assert.o 00:02:43.393 CXX test/cpp_headers/barrier.o 00:02:43.393 CXX test/cpp_headers/base64.o 
00:02:43.393 CXX test/cpp_headers/bdev.o 00:02:43.393 CXX test/cpp_headers/bdev_module.o 00:02:43.393 CXX test/cpp_headers/bdev_zone.o 00:02:43.393 CXX test/cpp_headers/blob_bdev.o 00:02:43.393 CXX test/cpp_headers/bit_array.o 00:02:43.393 CXX test/cpp_headers/bit_pool.o 00:02:43.393 CXX test/cpp_headers/blobfs_bdev.o 00:02:43.393 CXX test/cpp_headers/blobfs.o 00:02:43.393 CXX test/cpp_headers/blob.o 00:02:43.393 CXX test/cpp_headers/conf.o 00:02:43.393 CXX test/cpp_headers/config.o 00:02:43.393 CXX test/cpp_headers/cpuset.o 00:02:43.393 CXX test/cpp_headers/crc16.o 00:02:43.393 CXX test/cpp_headers/crc32.o 00:02:43.393 CXX test/cpp_headers/crc64.o 00:02:43.393 CXX test/cpp_headers/dma.o 00:02:43.393 CXX test/cpp_headers/dif.o 00:02:43.393 CXX test/cpp_headers/endian.o 00:02:43.393 CXX test/cpp_headers/env_dpdk.o 00:02:43.393 CXX test/cpp_headers/env.o 00:02:43.393 CXX test/cpp_headers/fd_group.o 00:02:43.393 CXX test/cpp_headers/event.o 00:02:43.393 CXX test/cpp_headers/fd.o 00:02:43.393 CXX test/cpp_headers/file.o 00:02:43.393 CXX test/cpp_headers/fsdev.o 00:02:43.393 CXX test/cpp_headers/fsdev_module.o 00:02:43.393 CXX test/cpp_headers/ftl.o 00:02:43.393 CXX test/cpp_headers/fuse_dispatcher.o 00:02:43.393 CXX test/cpp_headers/gpt_spec.o 00:02:43.393 CXX test/cpp_headers/hexlify.o 00:02:43.393 CXX test/cpp_headers/histogram_data.o 00:02:43.393 CXX test/cpp_headers/idxd.o 00:02:43.393 CXX test/cpp_headers/idxd_spec.o 00:02:43.393 CXX test/cpp_headers/ioat_spec.o 00:02:43.393 CXX test/cpp_headers/init.o 00:02:43.393 CXX test/cpp_headers/ioat.o 00:02:43.393 CXX test/cpp_headers/json.o 00:02:43.393 CXX test/cpp_headers/jsonrpc.o 00:02:43.393 CXX test/cpp_headers/keyring.o 00:02:43.393 CXX test/cpp_headers/iscsi_spec.o 00:02:43.393 CXX test/cpp_headers/keyring_module.o 00:02:43.393 CXX test/cpp_headers/likely.o 00:02:43.393 CXX test/cpp_headers/log.o 00:02:43.393 CXX test/cpp_headers/md5.o 00:02:43.393 CXX test/cpp_headers/lvol.o 00:02:43.393 CXX test/cpp_headers/nbd.o 00:02:43.393 CXX test/cpp_headers/memory.o 00:02:43.393 CXX test/cpp_headers/mmio.o 00:02:43.393 CXX test/cpp_headers/net.o 00:02:43.393 CXX test/cpp_headers/notify.o 00:02:43.393 CXX test/cpp_headers/nvme_spec.o 00:02:43.393 CXX test/cpp_headers/nvme.o 00:02:43.393 CXX test/cpp_headers/nvme_ocssd.o 00:02:43.393 CXX test/cpp_headers/nvme_intel.o 00:02:43.393 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:43.393 CXX test/cpp_headers/nvmf.o 00:02:43.393 CXX test/cpp_headers/nvme_zns.o 00:02:43.393 CXX test/cpp_headers/nvmf_spec.o 00:02:43.393 CXX test/cpp_headers/nvmf_cmd.o 00:02:43.393 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:43.393 CXX test/cpp_headers/nvmf_transport.o 00:02:43.393 CXX test/cpp_headers/pci_ids.o 00:02:43.393 CXX test/cpp_headers/opal.o 00:02:43.393 CC examples/util/zipf/zipf.o 00:02:43.393 CXX test/cpp_headers/opal_spec.o 00:02:43.393 CXX test/cpp_headers/pipe.o 00:02:43.393 CXX test/cpp_headers/queue.o 00:02:43.393 CXX test/cpp_headers/reduce.o 00:02:43.661 CXX test/cpp_headers/rpc.o 00:02:43.661 CXX test/cpp_headers/scheduler.o 00:02:43.661 CXX test/cpp_headers/stdinc.o 00:02:43.661 CXX test/cpp_headers/scsi.o 00:02:43.661 CXX test/cpp_headers/scsi_spec.o 00:02:43.661 CXX test/cpp_headers/sock.o 00:02:43.661 CXX test/cpp_headers/trace.o 00:02:43.661 CXX test/cpp_headers/thread.o 00:02:43.661 CXX test/cpp_headers/string.o 00:02:43.661 CXX test/cpp_headers/trace_parser.o 00:02:43.661 CXX test/cpp_headers/tree.o 00:02:43.661 CXX test/cpp_headers/ublk.o 00:02:43.661 LINK spdk_lspci 00:02:43.661 CXX 
test/cpp_headers/util.o 00:02:43.661 CXX test/cpp_headers/version.o 00:02:43.661 CXX test/cpp_headers/uuid.o 00:02:43.661 CC test/app/stub/stub.o 00:02:43.661 CXX test/cpp_headers/vfio_user_spec.o 00:02:43.661 CXX test/cpp_headers/vfio_user_pci.o 00:02:43.661 CXX test/cpp_headers/vhost.o 00:02:43.661 CC examples/ioat/verify/verify.o 00:02:43.661 CC app/fio/nvme/fio_plugin.o 00:02:43.661 CC test/dma/test_dma/test_dma.o 00:02:43.661 CXX test/cpp_headers/vmd.o 00:02:43.661 CXX test/cpp_headers/xor.o 00:02:43.661 CC test/thread/poller_perf/poller_perf.o 00:02:43.661 CC examples/ioat/perf/perf.o 00:02:43.661 CXX test/cpp_headers/zipf.o 00:02:43.661 CC test/app/jsoncat/jsoncat.o 00:02:43.661 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:43.661 CC test/app/histogram_perf/histogram_perf.o 00:02:43.661 CC test/env/pci/pci_ut.o 00:02:43.661 CC test/env/vtophys/vtophys.o 00:02:43.661 CC test/env/memory/memory_ut.o 00:02:43.661 CC app/fio/bdev/fio_plugin.o 00:02:43.661 CC test/app/bdev_svc/bdev_svc.o 00:02:43.661 LINK rpc_client_test 00:02:43.932 LINK spdk_nvme_discover 00:02:43.933 LINK interrupt_tgt 00:02:43.933 LINK nvmf_tgt 00:02:43.933 LINK iscsi_tgt 00:02:43.933 LINK spdk_tgt 00:02:44.197 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:44.197 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:44.197 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:44.197 LINK histogram_perf 00:02:44.197 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:44.197 LINK spdk_trace_record 00:02:44.197 LINK verify 00:02:44.197 CC test/env/mem_callbacks/mem_callbacks.o 00:02:44.457 LINK bdev_svc 00:02:44.457 LINK zipf 00:02:44.718 LINK vtophys 00:02:44.718 LINK poller_perf 00:02:44.718 LINK spdk_dd 00:02:44.718 LINK jsoncat 00:02:44.718 LINK env_dpdk_post_init 00:02:44.718 LINK stub 00:02:44.718 LINK spdk_trace 00:02:44.718 LINK pci_ut 00:02:44.718 LINK ioat_perf 00:02:44.718 LINK test_dma 00:02:44.718 LINK spdk_nvme_perf 00:02:44.979 LINK nvme_fuzz 00:02:44.979 LINK vhost_fuzz 00:02:44.979 LINK spdk_nvme_identify 00:02:44.979 LINK spdk_bdev 00:02:44.979 LINK spdk_nvme 00:02:44.979 LINK spdk_top 00:02:45.241 CC app/vhost/vhost.o 00:02:45.241 LINK mem_callbacks 00:02:45.241 CC test/event/reactor_perf/reactor_perf.o 00:02:45.241 CC examples/idxd/perf/perf.o 00:02:45.241 CC test/event/event_perf/event_perf.o 00:02:45.241 CC examples/sock/hello_world/hello_sock.o 00:02:45.241 CC examples/vmd/lsvmd/lsvmd.o 00:02:45.241 CC test/event/reactor/reactor.o 00:02:45.241 CC examples/vmd/led/led.o 00:02:45.241 CC test/event/app_repeat/app_repeat.o 00:02:45.241 CC examples/thread/thread/thread_ex.o 00:02:45.241 CC test/event/scheduler/scheduler.o 00:02:45.503 LINK reactor_perf 00:02:45.503 CC test/nvme/e2edp/nvme_dp.o 00:02:45.503 LINK lsvmd 00:02:45.503 CC test/nvme/err_injection/err_injection.o 00:02:45.503 CC test/nvme/overhead/overhead.o 00:02:45.503 CC test/nvme/fdp/fdp.o 00:02:45.503 LINK reactor 00:02:45.503 LINK event_perf 00:02:45.503 CC test/nvme/aer/aer.o 00:02:45.503 CC test/nvme/reserve/reserve.o 00:02:45.503 CC test/nvme/sgl/sgl.o 00:02:45.503 LINK vhost 00:02:45.503 CC test/nvme/reset/reset.o 00:02:45.503 LINK led 00:02:45.503 CC test/nvme/fused_ordering/fused_ordering.o 00:02:45.503 CC test/nvme/startup/startup.o 00:02:45.503 CC test/nvme/cuse/cuse.o 00:02:45.503 CC test/nvme/connect_stress/connect_stress.o 00:02:45.503 CC test/nvme/compliance/nvme_compliance.o 00:02:45.503 CC test/nvme/simple_copy/simple_copy.o 00:02:45.503 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:45.503 CC 
test/nvme/boot_partition/boot_partition.o 00:02:45.503 LINK app_repeat 00:02:45.503 CC test/blobfs/mkfs/mkfs.o 00:02:45.503 CC test/accel/dif/dif.o 00:02:45.503 LINK hello_sock 00:02:45.503 LINK scheduler 00:02:45.503 LINK thread 00:02:45.503 LINK idxd_perf 00:02:45.503 CC test/lvol/esnap/esnap.o 00:02:45.503 LINK err_injection 00:02:45.764 LINK startup 00:02:45.764 LINK connect_stress 00:02:45.764 LINK boot_partition 00:02:45.764 LINK reserve 00:02:45.764 LINK doorbell_aers 00:02:45.764 LINK fused_ordering 00:02:45.764 LINK memory_ut 00:02:45.764 LINK simple_copy 00:02:45.764 LINK reset 00:02:45.764 LINK nvme_dp 00:02:45.764 LINK aer 00:02:45.764 LINK mkfs 00:02:45.764 LINK overhead 00:02:45.764 LINK sgl 00:02:45.764 LINK nvme_compliance 00:02:45.764 LINK fdp 00:02:46.026 LINK iscsi_fuzz 00:02:46.026 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:46.026 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:46.026 CC examples/nvme/abort/abort.o 00:02:46.026 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:46.026 CC examples/nvme/hello_world/hello_world.o 00:02:46.026 CC examples/nvme/reconnect/reconnect.o 00:02:46.026 CC examples/nvme/hotplug/hotplug.o 00:02:46.026 CC examples/nvme/arbitration/arbitration.o 00:02:46.026 CC examples/accel/perf/accel_perf.o 00:02:46.026 LINK dif 00:02:46.286 CC examples/blob/hello_world/hello_blob.o 00:02:46.286 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:46.286 CC examples/blob/cli/blobcli.o 00:02:46.286 LINK pmr_persistence 00:02:46.286 LINK cmb_copy 00:02:46.286 LINK hello_world 00:02:46.286 LINK hotplug 00:02:46.286 LINK arbitration 00:02:46.286 LINK abort 00:02:46.286 LINK reconnect 00:02:46.547 LINK hello_blob 00:02:46.547 LINK nvme_manage 00:02:46.547 LINK hello_fsdev 00:02:46.547 LINK accel_perf 00:02:46.807 LINK cuse 00:02:46.807 LINK blobcli 00:02:46.807 CC test/bdev/bdevio/bdevio.o 00:02:47.067 LINK bdevio 00:02:47.067 CC examples/bdev/hello_world/hello_bdev.o 00:02:47.067 CC examples/bdev/bdevperf/bdevperf.o 00:02:47.638 LINK hello_bdev 00:02:47.899 LINK bdevperf 00:02:48.470 CC examples/nvmf/nvmf/nvmf.o 00:02:49.042 LINK nvmf 00:02:49.985 LINK esnap 00:02:50.556 00:02:50.556 real 0m55.949s 00:02:50.556 user 8m7.786s 00:02:50.556 sys 5m35.463s 00:02:50.556 18:53:07 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:50.556 18:53:07 make -- common/autotest_common.sh@10 -- $ set +x 00:02:50.556 ************************************ 00:02:50.556 END TEST make 00:02:50.556 ************************************ 00:02:50.556 18:53:07 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:50.556 18:53:07 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:50.556 18:53:07 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:50.556 18:53:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.556 18:53:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:50.556 18:53:07 -- pm/common@44 -- $ pid=2612544 00:02:50.556 18:53:07 -- pm/common@50 -- $ kill -TERM 2612544 00:02:50.557 18:53:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.557 18:53:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:50.557 18:53:07 -- pm/common@44 -- $ pid=2612545 00:02:50.557 18:53:07 -- pm/common@50 -- $ kill -TERM 2612545 00:02:50.557 18:53:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.557 18:53:07 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:50.557 18:53:07 -- pm/common@44 -- $ pid=2612547 00:02:50.557 18:53:07 -- pm/common@50 -- $ kill -TERM 2612547 00:02:50.557 18:53:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.557 18:53:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:50.557 18:53:07 -- pm/common@44 -- $ pid=2612571 00:02:50.557 18:53:07 -- pm/common@50 -- $ sudo -E kill -TERM 2612571 00:02:50.557 18:53:07 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:50.557 18:53:07 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:50.557 18:53:07 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:50.557 18:53:07 -- common/autotest_common.sh@1693 -- # lcov --version 00:02:50.557 18:53:07 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:50.818 18:53:07 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:50.818 18:53:07 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:50.818 18:53:07 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:50.818 18:53:07 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:50.818 18:53:07 -- scripts/common.sh@336 -- # IFS=.-: 00:02:50.818 18:53:07 -- scripts/common.sh@336 -- # read -ra ver1 00:02:50.818 18:53:07 -- scripts/common.sh@337 -- # IFS=.-: 00:02:50.818 18:53:07 -- scripts/common.sh@337 -- # read -ra ver2 00:02:50.818 18:53:07 -- scripts/common.sh@338 -- # local 'op=<' 00:02:50.818 18:53:07 -- scripts/common.sh@340 -- # ver1_l=2 00:02:50.818 18:53:07 -- scripts/common.sh@341 -- # ver2_l=1 00:02:50.818 18:53:07 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:50.818 18:53:07 -- scripts/common.sh@344 -- # case "$op" in 00:02:50.818 18:53:07 -- scripts/common.sh@345 -- # : 1 00:02:50.818 18:53:07 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:50.818 18:53:07 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:50.818 18:53:07 -- scripts/common.sh@365 -- # decimal 1 00:02:50.818 18:53:07 -- scripts/common.sh@353 -- # local d=1 00:02:50.818 18:53:07 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:50.818 18:53:07 -- scripts/common.sh@355 -- # echo 1 00:02:50.818 18:53:07 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:50.818 18:53:07 -- scripts/common.sh@366 -- # decimal 2 00:02:50.818 18:53:07 -- scripts/common.sh@353 -- # local d=2 00:02:50.818 18:53:07 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:50.818 18:53:07 -- scripts/common.sh@355 -- # echo 2 00:02:50.818 18:53:07 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:50.818 18:53:07 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:50.818 18:53:07 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:50.818 18:53:07 -- scripts/common.sh@368 -- # return 0 00:02:50.818 18:53:07 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:50.818 18:53:07 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:50.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:50.818 --rc genhtml_branch_coverage=1 00:02:50.818 --rc genhtml_function_coverage=1 00:02:50.818 --rc genhtml_legend=1 00:02:50.818 --rc geninfo_all_blocks=1 00:02:50.818 --rc geninfo_unexecuted_blocks=1 00:02:50.818 00:02:50.818 ' 00:02:50.818 18:53:07 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:50.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:50.818 --rc genhtml_branch_coverage=1 00:02:50.818 --rc genhtml_function_coverage=1 00:02:50.818 --rc genhtml_legend=1 00:02:50.818 --rc geninfo_all_blocks=1 00:02:50.818 --rc geninfo_unexecuted_blocks=1 00:02:50.818 00:02:50.818 ' 00:02:50.818 18:53:07 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:50.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:50.818 --rc genhtml_branch_coverage=1 00:02:50.818 --rc genhtml_function_coverage=1 00:02:50.818 --rc genhtml_legend=1 00:02:50.818 --rc geninfo_all_blocks=1 00:02:50.818 --rc geninfo_unexecuted_blocks=1 00:02:50.818 00:02:50.818 ' 00:02:50.818 18:53:07 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:50.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:50.818 --rc genhtml_branch_coverage=1 00:02:50.818 --rc genhtml_function_coverage=1 00:02:50.818 --rc genhtml_legend=1 00:02:50.818 --rc geninfo_all_blocks=1 00:02:50.818 --rc geninfo_unexecuted_blocks=1 00:02:50.818 00:02:50.818 ' 00:02:50.818 18:53:07 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:50.818 18:53:07 -- nvmf/common.sh@7 -- # uname -s 00:02:50.818 18:53:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:50.818 18:53:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:50.818 18:53:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:50.818 18:53:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:50.818 18:53:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:50.818 18:53:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:50.818 18:53:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:50.818 18:53:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:50.818 18:53:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:50.818 18:53:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:50.818 18:53:07 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:50.818 18:53:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:50.818 18:53:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:50.818 18:53:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:50.818 18:53:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:50.818 18:53:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:50.818 18:53:07 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:50.818 18:53:07 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:50.818 18:53:07 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:50.818 18:53:07 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:50.818 18:53:07 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:50.818 18:53:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.818 18:53:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.819 18:53:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.819 18:53:07 -- paths/export.sh@5 -- # export PATH 00:02:50.819 18:53:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.819 18:53:07 -- nvmf/common.sh@51 -- # : 0 00:02:50.819 18:53:07 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:50.819 18:53:07 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:50.819 18:53:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:50.819 18:53:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:50.819 18:53:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:50.819 18:53:07 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:50.819 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:50.819 18:53:07 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:50.819 18:53:07 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:50.819 18:53:07 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:50.819 18:53:07 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:50.819 18:53:07 -- spdk/autotest.sh@32 -- # uname -s 00:02:50.819 18:53:07 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:50.819 18:53:07 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:50.819 18:53:07 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
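[editor's note] The cmp_versions trace a little earlier walks `lt 1.15 2` field by field to decide whether the installed lcov predates 2.0 and therefore needs the legacy --rc options. A minimal bash sketch of that comparison, reconstructed from the trace rather than copied from scripts/common.sh (the helper name version_lt is ours):

    #!/usr/bin/env bash
    # Sketch of the dotted-version compare traced above: split both versions
    # on ".", "-" and ":" and compare numerically field by field; a missing
    # field counts as 0, so "1.15" vs "2" behaves as expected.
    version_lt() {
        local IFS='.-:'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < n; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly less: true
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly greater: false
        done
        return 1   # equal versions are not less-than
    }

    # "lcov: LCOV version 1.15" -> awk '{print $NF}' -> "1.15", as in the trace.
    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi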
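[editor's note] The `common.sh: line 33: [: : integer expression expected` message captured above is `[ '' -eq 1 ]` failing: `-eq` requires integer operands, and the tested variable expanded to the empty string. A sketch of the failure and one way to guard it (flag is a hypothetical stand-in for the unset variable):

    flag=''                           # hypothetical stand-in for the unset variable
    [ "$flag" -eq 1 ]                 # reproduces "[: : integer expression expected"
    if [ "${flag:-0}" -eq 1 ]; then   # default empty/unset to 0 before -eq
        echo 'flag is set'
    fi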
00:02:50.819 18:53:07 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:50.819 18:53:07 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:50.819 18:53:07 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:50.819 18:53:07 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:50.819 18:53:07 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:50.819 18:53:07 -- spdk/autotest.sh@48 -- # udevadm_pid=2678070 00:02:50.819 18:53:07 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:50.819 18:53:07 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:50.819 18:53:07 -- pm/common@17 -- # local monitor 00:02:50.819 18:53:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.819 18:53:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.819 18:53:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.819 18:53:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.819 18:53:07 -- pm/common@21 -- # date +%s 00:02:50.819 18:53:07 -- pm/common@25 -- # sleep 1 00:02:50.819 18:53:07 -- pm/common@21 -- # date +%s 00:02:50.819 18:53:07 -- pm/common@21 -- # date +%s 00:02:50.819 18:53:07 -- pm/common@21 -- # date +%s 00:02:50.819 18:53:07 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732643587 00:02:50.819 18:53:07 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732643587 00:02:50.819 18:53:07 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732643587 00:02:50.819 18:53:07 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732643587 00:02:50.819 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732643587_collect-vmstat.pm.log 00:02:50.819 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732643587_collect-cpu-load.pm.log 00:02:50.819 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732643587_collect-cpu-temp.pm.log 00:02:50.819 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732643587_collect-bmc-pm.bmc.pm.log 00:02:51.762 18:53:08 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:51.762 18:53:08 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:51.762 18:53:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:51.762 18:53:08 -- common/autotest_common.sh@10 -- # set +x 00:02:51.762 18:53:08 -- spdk/autotest.sh@59 -- # create_test_list 00:02:51.762 18:53:08 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:51.762 18:53:08 -- common/autotest_common.sh@10 -- # set +x 00:02:51.762 18:53:08 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:51.762 18:53:08 
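[editor's note] Putting the core_pattern steps above together: the previous pattern (here the systemd-coredump pipe) is saved, core dumps are redirected into the repo's collector script, and the saved value can be restored on exit. A sketch with stand-in variables ($output_dir and $rootdir are placeholders; writing core_pattern needs root):

    old_core_pattern=$(< /proc/sys/kernel/core_pattern)   # the systemd-coredump pipe above
    mkdir -p "$output_dir/coredumps"
    echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
    trap 'echo "$old_core_pattern" > /proc/sys/kernel/core_pattern' EXIT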
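[editor's note] The four collect-* monitors launched above follow a pid-file pattern: each collector is started in the background with a timestamped log (the "Redirecting to ..." lines), its pid is recorded under the power/ output directory, and the kill -TERM calls at the top of this section are the matching cleanup. A reduced sketch (start_monitor/stop_monitors are our names, not the pm/common ones):

    power_dir=$output_dir/power          # stand-in for .../spdk/../output/power

    start_monitor() {
        local name=$1; shift
        # -d/-l/-p mirror the collector flags visible in the trace above.
        "$@" -d "$power_dir" -l -p "monitor.autotest.sh.$(date +%s)" &
        echo $! > "$power_dir/$name.pid"  # record pid for the later TERM
    }

    stop_monitors() {
        local pidfile
        for pidfile in "$power_dir"/collect-*.pid; do
            [ -e "$pidfile" ] || continue
            kill -TERM "$(cat "$pidfile")" 2>/dev/null || true
        done
    }

    start_monitor collect-vmstat "$rootdir/scripts/perf/pm/collect-vmstat"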
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:51.762 18:53:08 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:51.762 18:53:08 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:51.762 18:53:08 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:51.762 18:53:08 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:51.762 18:53:08 -- common/autotest_common.sh@1457 -- # uname 00:02:51.762 18:53:08 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:51.762 18:53:08 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:51.762 18:53:08 -- common/autotest_common.sh@1477 -- # uname 00:02:51.762 18:53:08 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:51.762 18:53:08 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:51.762 18:53:08 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:52.022 lcov: LCOV version 1.15 00:02:52.022 18:53:09 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:06.935 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:06.935 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:25.051 18:53:39 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:25.051 18:53:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:25.051 18:53:39 -- common/autotest_common.sh@10 -- # set +x 00:03:25.051 18:53:39 -- spdk/autotest.sh@78 -- # rm -f 00:03:25.051 18:53:39 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:26.004 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:26.004 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:26.004 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:26.004 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:26.004 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:26.004 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:26.004 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:26.004 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:26.004 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:26.004 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:26.004 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:26.004 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:26.004 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:26.004 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:26.344 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:26.344 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:26.344 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:26.609 18:53:43 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:03:26.609 18:53:43 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:26.609 18:53:43 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:26.609 18:53:43 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:26.609 18:53:43 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:26.609 18:53:43 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:26.609 18:53:43 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:26.609 18:53:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:26.609 18:53:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:26.609 18:53:43 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:26.609 18:53:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:26.609 18:53:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:26.609 18:53:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:26.609 18:53:43 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:26.609 18:53:43 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:26.609 No valid GPT data, bailing 00:03:26.609 18:53:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:26.609 18:53:43 -- scripts/common.sh@394 -- # pt= 00:03:26.609 18:53:43 -- scripts/common.sh@395 -- # return 1 00:03:26.609 18:53:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:26.609 1+0 records in 00:03:26.609 1+0 records out 00:03:26.609 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00161585 s, 649 MB/s 00:03:26.609 18:53:43 -- spdk/autotest.sh@105 -- # sync 00:03:26.609 18:53:43 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:26.609 18:53:43 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:26.609 18:53:43 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:36.635 18:53:52 -- spdk/autotest.sh@111 -- # uname -s 00:03:36.635 18:53:52 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:36.635 18:53:52 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:36.635 18:53:52 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:38.544 Hugepages 00:03:38.544 node hugesize free / total 00:03:38.544 node0 1048576kB 0 / 0 00:03:38.544 node0 2048kB 0 / 0 00:03:38.544 node1 1048576kB 0 / 0 00:03:38.805 node1 2048kB 0 / 0 00:03:38.805 00:03:38.805 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:38.805 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:03:38.805 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:03:38.805 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:03:38.805 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:03:38.805 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:03:38.805 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:03:38.805 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:03:38.805 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:03:38.805 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:03:38.805 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:03:38.805 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:03:38.805 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:03:38.805 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:03:38.805 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:03:38.805 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:03:38.805 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:03:38.805 I/OAT 0000:80:01.7 8086 
0b00 1 ioatdma - - 00:03:38.805 18:53:55 -- spdk/autotest.sh@117 -- # uname -s 00:03:38.805 18:53:55 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:38.805 18:53:55 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:38.805 18:53:55 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:43.014 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:43.014 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:43.014 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:43.014 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:43.014 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:43.014 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:43.014 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:43.014 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:43.014 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:43.014 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:43.014 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:43.014 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:43.014 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:43.014 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:43.014 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:43.014 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:44.397 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:44.658 18:54:01 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:45.600 18:54:02 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:45.600 18:54:02 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:45.600 18:54:02 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:45.600 18:54:02 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:45.600 18:54:02 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:45.600 18:54:02 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:45.600 18:54:02 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:45.600 18:54:02 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:45.600 18:54:02 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:45.861 18:54:02 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:45.861 18:54:02 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:03:45.861 18:54:02 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:49.164 Waiting for block devices as requested 00:03:49.164 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:49.164 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:49.426 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:49.426 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:49.426 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:49.687 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:49.687 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:49.687 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:49.949 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:03:49.949 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:50.211 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:50.211 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:50.211 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:50.472 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:50.472 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:50.472 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:50.733 0000:00:01.1 (8086 0b00): 
vfio-pci -> ioatdma 00:03:50.993 18:54:08 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:50.993 18:54:08 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:03:50.993 18:54:08 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:50.993 18:54:08 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:03:50.993 18:54:08 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:50.993 18:54:08 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:03:50.993 18:54:08 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:50.993 18:54:08 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:50.993 18:54:08 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:50.993 18:54:08 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:50.993 18:54:08 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:50.994 18:54:08 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:50.994 18:54:08 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:50.994 18:54:08 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:03:50.994 18:54:08 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:50.994 18:54:08 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:50.994 18:54:08 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:50.994 18:54:08 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:50.994 18:54:08 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:50.994 18:54:08 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:50.994 18:54:08 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:50.994 18:54:08 -- common/autotest_common.sh@1543 -- # continue 00:03:50.994 18:54:08 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:50.994 18:54:08 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:50.994 18:54:08 -- common/autotest_common.sh@10 -- # set +x 00:03:50.994 18:54:08 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:50.994 18:54:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:50.994 18:54:08 -- common/autotest_common.sh@10 -- # set +x 00:03:50.994 18:54:08 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:55.199 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:55.199 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:55.199 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:55.199 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:55.199 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:55.199 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:55.199 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:55.199 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:55.199 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:55.199 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:55.199 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:55.199 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:55.199 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:55.199 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:55.199 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:55.199 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:55.199 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:55.199 18:54:12 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:03:55.199 18:54:12 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:55.199 18:54:12 -- common/autotest_common.sh@10 -- # set +x 00:03:55.199 18:54:12 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:55.199 18:54:12 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:55.199 18:54:12 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:55.199 18:54:12 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:55.199 18:54:12 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:55.199 18:54:12 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:55.199 18:54:12 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:55.199 18:54:12 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:55.199 18:54:12 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:55.199 18:54:12 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:55.199 18:54:12 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:55.199 18:54:12 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:55.199 18:54:12 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:55.199 18:54:12 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:55.199 18:54:12 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:03:55.199 18:54:12 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:55.199 18:54:12 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:03:55.199 18:54:12 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:03:55.199 18:54:12 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:03:55.199 18:54:12 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:03:55.199 18:54:12 -- common/autotest_common.sh@1572 -- # return 0 00:03:55.199 18:54:12 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:03:55.199 18:54:12 -- common/autotest_common.sh@1580 -- # return 0 00:03:55.199 18:54:12 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:55.199 18:54:12 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:55.199 18:54:12 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:55.199 18:54:12 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:55.199 18:54:12 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:55.199 18:54:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:55.199 18:54:12 -- common/autotest_common.sh@10 -- # set +x 00:03:55.199 18:54:12 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:55.199 18:54:12 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:55.199 18:54:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:55.199 18:54:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:55.199 18:54:12 -- common/autotest_common.sh@10 -- # set +x 00:03:55.461 ************************************ 00:03:55.461 START TEST env 00:03:55.461 ************************************ 00:03:55.461 18:54:12 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:55.461 * Looking for test storage... 
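[editor's note] The opal_revert_cleanup pass above shows how controllers are selected: gen_nvme.sh emits an SPDK JSON config, jq extracts each controller's traddr, and each BDF's PCI device id is compared against 0x0a54, the id the opal tests care about (the 0xa80a Samsung device in this run does not match, so nothing is reverted). A sketch of that selection:

    mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
    (( ${#bdfs[@]} )) || { echo 'no NVMe controllers found' >&2; exit 1; }

    matches=()
    for bdf in "${bdfs[@]}"; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")   # 0xa80a in this run
        [[ $device == 0x0a54 ]] && matches+=("$bdf")
    done
    # Empty here, so the opal revert is skipped, matching the (( 0 > 0 )) above.
    (( ${#matches[@]} )) && printf '%s\n' "${matches[@]}"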
00:03:55.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:55.461 18:54:12 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:55.461 18:54:12 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:55.461 18:54:12 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:55.461 18:54:12 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:55.461 18:54:12 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:55.461 18:54:12 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:55.461 18:54:12 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:55.461 18:54:12 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:55.461 18:54:12 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:55.461 18:54:12 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:55.461 18:54:12 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:55.461 18:54:12 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:55.461 18:54:12 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:55.461 18:54:12 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:55.461 18:54:12 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:55.461 18:54:12 env -- scripts/common.sh@344 -- # case "$op" in 00:03:55.461 18:54:12 env -- scripts/common.sh@345 -- # : 1 00:03:55.461 18:54:12 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:55.461 18:54:12 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:55.461 18:54:12 env -- scripts/common.sh@365 -- # decimal 1 00:03:55.461 18:54:12 env -- scripts/common.sh@353 -- # local d=1 00:03:55.461 18:54:12 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:55.461 18:54:12 env -- scripts/common.sh@355 -- # echo 1 00:03:55.461 18:54:12 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:55.461 18:54:12 env -- scripts/common.sh@366 -- # decimal 2 00:03:55.461 18:54:12 env -- scripts/common.sh@353 -- # local d=2 00:03:55.461 18:54:12 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:55.461 18:54:12 env -- scripts/common.sh@355 -- # echo 2 00:03:55.461 18:54:12 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:55.461 18:54:12 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:55.461 18:54:12 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:55.462 18:54:12 env -- scripts/common.sh@368 -- # return 0 00:03:55.462 18:54:12 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:55.462 18:54:12 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:55.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.462 --rc genhtml_branch_coverage=1 00:03:55.462 --rc genhtml_function_coverage=1 00:03:55.462 --rc genhtml_legend=1 00:03:55.462 --rc geninfo_all_blocks=1 00:03:55.462 --rc geninfo_unexecuted_blocks=1 00:03:55.462 00:03:55.462 ' 00:03:55.462 18:54:12 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:55.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.462 --rc genhtml_branch_coverage=1 00:03:55.462 --rc genhtml_function_coverage=1 00:03:55.462 --rc genhtml_legend=1 00:03:55.462 --rc geninfo_all_blocks=1 00:03:55.462 --rc geninfo_unexecuted_blocks=1 00:03:55.462 00:03:55.462 ' 00:03:55.462 18:54:12 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:55.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.462 --rc genhtml_branch_coverage=1 00:03:55.462 --rc genhtml_function_coverage=1 
00:03:55.462 --rc genhtml_legend=1 00:03:55.462 --rc geninfo_all_blocks=1 00:03:55.462 --rc geninfo_unexecuted_blocks=1 00:03:55.462 00:03:55.462 ' 00:03:55.462 18:54:12 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:55.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.462 --rc genhtml_branch_coverage=1 00:03:55.462 --rc genhtml_function_coverage=1 00:03:55.462 --rc genhtml_legend=1 00:03:55.462 --rc geninfo_all_blocks=1 00:03:55.462 --rc geninfo_unexecuted_blocks=1 00:03:55.462 00:03:55.462 ' 00:03:55.462 18:54:12 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:55.462 18:54:12 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:55.462 18:54:12 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:55.462 18:54:12 env -- common/autotest_common.sh@10 -- # set +x 00:03:55.462 ************************************ 00:03:55.462 START TEST env_memory 00:03:55.462 ************************************ 00:03:55.462 18:54:12 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:55.462 00:03:55.462 00:03:55.462 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.462 http://cunit.sourceforge.net/ 00:03:55.462 00:03:55.462 00:03:55.462 Suite: memory 00:03:55.723 Test: alloc and free memory map ...[2024-11-26 18:54:12.705436] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:55.723 passed 00:03:55.723 Test: mem map translation ...[2024-11-26 18:54:12.731063] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:55.723 [2024-11-26 18:54:12.731105] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:55.723 [2024-11-26 18:54:12.731152] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:55.723 [2024-11-26 18:54:12.731162] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:55.723 passed 00:03:55.723 Test: mem map registration ...[2024-11-26 18:54:12.786321] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:55.723 [2024-11-26 18:54:12.786346] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:55.723 passed 00:03:55.723 Test: mem map adjacent registrations ...passed 00:03:55.723 00:03:55.723 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.723 suites 1 1 n/a 0 0 00:03:55.723 tests 4 4 4 0 0 00:03:55.723 asserts 152 152 152 0 n/a 00:03:55.723 00:03:55.723 Elapsed time = 0.194 seconds 00:03:55.723 00:03:55.723 real 0m0.209s 00:03:55.723 user 0m0.195s 00:03:55.723 sys 0m0.013s 00:03:55.723 18:54:12 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:55.723 18:54:12 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
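[editor's note] The START TEST / END TEST banners framing env_memory above come from the run_test wrapper in autotest_common.sh; a reduced sketch of that shape (banner width and bookkeeping are simplified, and the time keyword accounts for the real/user/sys lines in the log):

    run_test() {
        local name=$1; shift
        printf '************************************\n'
        printf 'START TEST %s\n' "$name"
        printf '************************************\n'
        time "$@"
        local rc=$?          # capture before anything else resets $?
        printf '************************************\n'
        printf 'END TEST %s\n' "$name"
        printf '************************************\n'
        return $rc
    }

    run_test env_memory "$rootdir/test/env/memory/memory_ut"   # $rootdir is a stand-in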
00:03:55.723 ************************************ 00:03:55.723 END TEST env_memory 00:03:55.723 ************************************ 00:03:55.723 18:54:12 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:55.723 18:54:12 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:55.723 18:54:12 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:55.723 18:54:12 env -- common/autotest_common.sh@10 -- # set +x 00:03:55.987 ************************************ 00:03:55.987 START TEST env_vtophys 00:03:55.987 ************************************ 00:03:55.987 18:54:12 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:55.987 EAL: lib.eal log level changed from notice to debug 00:03:55.987 EAL: Detected lcore 0 as core 0 on socket 0 00:03:55.987 EAL: Detected lcore 1 as core 1 on socket 0 00:03:55.987 EAL: Detected lcore 2 as core 2 on socket 0 00:03:55.987 EAL: Detected lcore 3 as core 3 on socket 0 00:03:55.987 EAL: Detected lcore 4 as core 4 on socket 0 00:03:55.987 EAL: Detected lcore 5 as core 5 on socket 0 00:03:55.987 EAL: Detected lcore 6 as core 6 on socket 0 00:03:55.987 EAL: Detected lcore 7 as core 7 on socket 0 00:03:55.987 EAL: Detected lcore 8 as core 8 on socket 0 00:03:55.987 EAL: Detected lcore 9 as core 9 on socket 0 00:03:55.987 EAL: Detected lcore 10 as core 10 on socket 0 00:03:55.987 EAL: Detected lcore 11 as core 11 on socket 0 00:03:55.987 EAL: Detected lcore 12 as core 12 on socket 0 00:03:55.987 EAL: Detected lcore 13 as core 13 on socket 0 00:03:55.987 EAL: Detected lcore 14 as core 14 on socket 0 00:03:55.987 EAL: Detected lcore 15 as core 15 on socket 0 00:03:55.987 EAL: Detected lcore 16 as core 16 on socket 0 00:03:55.987 EAL: Detected lcore 17 as core 17 on socket 0 00:03:55.987 EAL: Detected lcore 18 as core 18 on socket 0 00:03:55.987 EAL: Detected lcore 19 as core 19 on socket 0 00:03:55.987 EAL: Detected lcore 20 as core 20 on socket 0 00:03:55.987 EAL: Detected lcore 21 as core 21 on socket 0 00:03:55.987 EAL: Detected lcore 22 as core 22 on socket 0 00:03:55.987 EAL: Detected lcore 23 as core 23 on socket 0 00:03:55.987 EAL: Detected lcore 24 as core 24 on socket 0 00:03:55.987 EAL: Detected lcore 25 as core 25 on socket 0 00:03:55.987 EAL: Detected lcore 26 as core 26 on socket 0 00:03:55.987 EAL: Detected lcore 27 as core 27 on socket 0 00:03:55.987 EAL: Detected lcore 28 as core 28 on socket 0 00:03:55.987 EAL: Detected lcore 29 as core 29 on socket 0 00:03:55.987 EAL: Detected lcore 30 as core 30 on socket 0 00:03:55.987 EAL: Detected lcore 31 as core 31 on socket 0 00:03:55.987 EAL: Detected lcore 32 as core 32 on socket 0 00:03:55.987 EAL: Detected lcore 33 as core 33 on socket 0 00:03:55.987 EAL: Detected lcore 34 as core 34 on socket 0 00:03:55.987 EAL: Detected lcore 35 as core 35 on socket 0 00:03:55.987 EAL: Detected lcore 36 as core 0 on socket 1 00:03:55.987 EAL: Detected lcore 37 as core 1 on socket 1 00:03:55.987 EAL: Detected lcore 38 as core 2 on socket 1 00:03:55.987 EAL: Detected lcore 39 as core 3 on socket 1 00:03:55.987 EAL: Detected lcore 40 as core 4 on socket 1 00:03:55.987 EAL: Detected lcore 41 as core 5 on socket 1 00:03:55.987 EAL: Detected lcore 42 as core 6 on socket 1 00:03:55.987 EAL: Detected lcore 43 as core 7 on socket 1 00:03:55.987 EAL: Detected lcore 44 as core 8 on socket 1 00:03:55.987 EAL: Detected lcore 45 as core 9 on socket 1 
00:03:55.987 EAL: Detected lcore 46 as core 10 on socket 1 00:03:55.987 EAL: Detected lcore 47 as core 11 on socket 1 00:03:55.987 EAL: Detected lcore 48 as core 12 on socket 1 00:03:55.987 EAL: Detected lcore 49 as core 13 on socket 1 00:03:55.987 EAL: Detected lcore 50 as core 14 on socket 1 00:03:55.987 EAL: Detected lcore 51 as core 15 on socket 1 00:03:55.987 EAL: Detected lcore 52 as core 16 on socket 1 00:03:55.987 EAL: Detected lcore 53 as core 17 on socket 1 00:03:55.987 EAL: Detected lcore 54 as core 18 on socket 1 00:03:55.987 EAL: Detected lcore 55 as core 19 on socket 1 00:03:55.987 EAL: Detected lcore 56 as core 20 on socket 1 00:03:55.987 EAL: Detected lcore 57 as core 21 on socket 1 00:03:55.987 EAL: Detected lcore 58 as core 22 on socket 1 00:03:55.987 EAL: Detected lcore 59 as core 23 on socket 1 00:03:55.987 EAL: Detected lcore 60 as core 24 on socket 1 00:03:55.987 EAL: Detected lcore 61 as core 25 on socket 1 00:03:55.987 EAL: Detected lcore 62 as core 26 on socket 1 00:03:55.987 EAL: Detected lcore 63 as core 27 on socket 1 00:03:55.987 EAL: Detected lcore 64 as core 28 on socket 1 00:03:55.987 EAL: Detected lcore 65 as core 29 on socket 1 00:03:55.987 EAL: Detected lcore 66 as core 30 on socket 1 00:03:55.987 EAL: Detected lcore 67 as core 31 on socket 1 00:03:55.987 EAL: Detected lcore 68 as core 32 on socket 1 00:03:55.987 EAL: Detected lcore 69 as core 33 on socket 1 00:03:55.987 EAL: Detected lcore 70 as core 34 on socket 1 00:03:55.987 EAL: Detected lcore 71 as core 35 on socket 1 00:03:55.987 EAL: Detected lcore 72 as core 0 on socket 0 00:03:55.987 EAL: Detected lcore 73 as core 1 on socket 0 00:03:55.987 EAL: Detected lcore 74 as core 2 on socket 0 00:03:55.987 EAL: Detected lcore 75 as core 3 on socket 0 00:03:55.987 EAL: Detected lcore 76 as core 4 on socket 0 00:03:55.987 EAL: Detected lcore 77 as core 5 on socket 0 00:03:55.987 EAL: Detected lcore 78 as core 6 on socket 0 00:03:55.987 EAL: Detected lcore 79 as core 7 on socket 0 00:03:55.987 EAL: Detected lcore 80 as core 8 on socket 0 00:03:55.987 EAL: Detected lcore 81 as core 9 on socket 0 00:03:55.987 EAL: Detected lcore 82 as core 10 on socket 0 00:03:55.987 EAL: Detected lcore 83 as core 11 on socket 0 00:03:55.987 EAL: Detected lcore 84 as core 12 on socket 0 00:03:55.987 EAL: Detected lcore 85 as core 13 on socket 0 00:03:55.987 EAL: Detected lcore 86 as core 14 on socket 0 00:03:55.987 EAL: Detected lcore 87 as core 15 on socket 0 00:03:55.987 EAL: Detected lcore 88 as core 16 on socket 0 00:03:55.987 EAL: Detected lcore 89 as core 17 on socket 0 00:03:55.987 EAL: Detected lcore 90 as core 18 on socket 0 00:03:55.987 EAL: Detected lcore 91 as core 19 on socket 0 00:03:55.987 EAL: Detected lcore 92 as core 20 on socket 0 00:03:55.987 EAL: Detected lcore 93 as core 21 on socket 0 00:03:55.987 EAL: Detected lcore 94 as core 22 on socket 0 00:03:55.987 EAL: Detected lcore 95 as core 23 on socket 0 00:03:55.987 EAL: Detected lcore 96 as core 24 on socket 0 00:03:55.987 EAL: Detected lcore 97 as core 25 on socket 0 00:03:55.987 EAL: Detected lcore 98 as core 26 on socket 0 00:03:55.987 EAL: Detected lcore 99 as core 27 on socket 0 00:03:55.987 EAL: Detected lcore 100 as core 28 on socket 0 00:03:55.987 EAL: Detected lcore 101 as core 29 on socket 0 00:03:55.987 EAL: Detected lcore 102 as core 30 on socket 0 00:03:55.987 EAL: Detected lcore 103 as core 31 on socket 0 00:03:55.987 EAL: Detected lcore 104 as core 32 on socket 0 00:03:55.987 EAL: Detected lcore 105 as core 33 on socket 0 00:03:55.987 EAL: 
Detected lcore 106 as core 34 on socket 0 00:03:55.987 EAL: Detected lcore 107 as core 35 on socket 0 00:03:55.987 EAL: Detected lcore 108 as core 0 on socket 1 00:03:55.987 EAL: Detected lcore 109 as core 1 on socket 1 00:03:55.987 EAL: Detected lcore 110 as core 2 on socket 1 00:03:55.987 EAL: Detected lcore 111 as core 3 on socket 1 00:03:55.987 EAL: Detected lcore 112 as core 4 on socket 1 00:03:55.987 EAL: Detected lcore 113 as core 5 on socket 1 00:03:55.987 EAL: Detected lcore 114 as core 6 on socket 1 00:03:55.987 EAL: Detected lcore 115 as core 7 on socket 1 00:03:55.987 EAL: Detected lcore 116 as core 8 on socket 1 00:03:55.987 EAL: Detected lcore 117 as core 9 on socket 1 00:03:55.987 EAL: Detected lcore 118 as core 10 on socket 1 00:03:55.987 EAL: Detected lcore 119 as core 11 on socket 1 00:03:55.987 EAL: Detected lcore 120 as core 12 on socket 1 00:03:55.987 EAL: Detected lcore 121 as core 13 on socket 1 00:03:55.987 EAL: Detected lcore 122 as core 14 on socket 1 00:03:55.987 EAL: Detected lcore 123 as core 15 on socket 1 00:03:55.987 EAL: Detected lcore 124 as core 16 on socket 1 00:03:55.987 EAL: Detected lcore 125 as core 17 on socket 1 00:03:55.987 EAL: Detected lcore 126 as core 18 on socket 1 00:03:55.987 EAL: Detected lcore 127 as core 19 on socket 1 00:03:55.987 EAL: Skipped lcore 128 as core 20 on socket 1 00:03:55.987 EAL: Skipped lcore 129 as core 21 on socket 1 00:03:55.987 EAL: Skipped lcore 130 as core 22 on socket 1 00:03:55.987 EAL: Skipped lcore 131 as core 23 on socket 1 00:03:55.987 EAL: Skipped lcore 132 as core 24 on socket 1 00:03:55.987 EAL: Skipped lcore 133 as core 25 on socket 1 00:03:55.987 EAL: Skipped lcore 134 as core 26 on socket 1 00:03:55.987 EAL: Skipped lcore 135 as core 27 on socket 1 00:03:55.987 EAL: Skipped lcore 136 as core 28 on socket 1 00:03:55.987 EAL: Skipped lcore 137 as core 29 on socket 1 00:03:55.987 EAL: Skipped lcore 138 as core 30 on socket 1 00:03:55.987 EAL: Skipped lcore 139 as core 31 on socket 1 00:03:55.987 EAL: Skipped lcore 140 as core 32 on socket 1 00:03:55.987 EAL: Skipped lcore 141 as core 33 on socket 1 00:03:55.987 EAL: Skipped lcore 142 as core 34 on socket 1 00:03:55.987 EAL: Skipped lcore 143 as core 35 on socket 1 00:03:55.987 EAL: Maximum logical cores by configuration: 128 00:03:55.987 EAL: Detected CPU lcores: 128 00:03:55.988 EAL: Detected NUMA nodes: 2 00:03:55.988 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:55.988 EAL: Detected shared linkage of DPDK 00:03:55.988 EAL: No shared files mode enabled, IPC will be disabled 00:03:55.988 EAL: Bus pci wants IOVA as 'DC' 00:03:55.988 EAL: Buses did not request a specific IOVA mode. 00:03:55.988 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:55.988 EAL: Selected IOVA mode 'VA' 00:03:55.988 EAL: Probing VFIO support... 00:03:55.988 EAL: IOMMU type 1 (Type 1) is supported 00:03:55.988 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:55.988 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:55.988 EAL: VFIO support initialized 00:03:55.988 EAL: Ask a virtual area of 0x2e000 bytes 00:03:55.988 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:55.988 EAL: Setting up physically contiguous memory... 
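[editor's note] The lcore map EAL prints above ("Detected lcore N as core C on socket S") comes from the sysfs CPU topology; a sketch that reproduces the same mapping outside DPDK, assuming a Linux /sys layout:

    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
        lcore=${cpu##*cpu}
        core=$(cat "$cpu/topology/core_id")
        socket=$(cat "$cpu/topology/physical_package_id")
        echo "Detected lcore $lcore as core $core on socket $socket"
    done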
00:03:55.988 EAL: Setting maximum number of open files to 524288 00:03:55.988 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:55.988 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:55.988 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:55.988 EAL: Ask a virtual area of 0x61000 bytes 00:03:55.988 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:55.988 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:55.988 EAL: Ask a virtual area of 0x400000000 bytes 00:03:55.988 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:55.988 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:55.988 EAL: Ask a virtual area of 0x61000 bytes 00:03:55.988 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:55.988 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:55.988 EAL: Ask a virtual area of 0x400000000 bytes 00:03:55.988 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:55.988 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:55.988 EAL: Ask a virtual area of 0x61000 bytes 00:03:55.988 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:55.988 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:55.988 EAL: Ask a virtual area of 0x400000000 bytes 00:03:55.988 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:55.988 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:55.988 EAL: Ask a virtual area of 0x61000 bytes 00:03:55.988 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:55.988 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:55.988 EAL: Ask a virtual area of 0x400000000 bytes 00:03:55.988 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:55.988 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:55.988 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:55.988 EAL: Ask a virtual area of 0x61000 bytes 00:03:55.988 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:55.988 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:55.988 EAL: Ask a virtual area of 0x400000000 bytes 00:03:55.988 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:55.988 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:55.988 EAL: Ask a virtual area of 0x61000 bytes 00:03:55.988 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:55.988 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:55.988 EAL: Ask a virtual area of 0x400000000 bytes 00:03:55.988 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:55.988 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:55.988 EAL: Ask a virtual area of 0x61000 bytes 00:03:55.988 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:55.988 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:55.988 EAL: Ask a virtual area of 0x400000000 bytes 00:03:55.988 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:55.988 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:55.988 EAL: Ask a virtual area of 0x61000 bytes 00:03:55.988 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:55.988 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:55.988 EAL: Ask a virtual area of 0x400000000 bytes 00:03:55.988 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:55.988 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:55.988 EAL: Hugepages will be freed exactly as allocated. 00:03:55.988 EAL: No shared files mode enabled, IPC is disabled 00:03:55.988 EAL: No shared files mode enabled, IPC is disabled 00:03:55.988 EAL: TSC frequency is ~2400000 KHz 00:03:55.988 EAL: Main lcore 0 is ready (tid=7eff84e97a00;cpuset=[0]) 00:03:55.988 EAL: Trying to obtain current memory policy. 00:03:55.988 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.988 EAL: Restoring previous memory policy: 0 00:03:55.988 EAL: request: mp_malloc_sync 00:03:55.988 EAL: No shared files mode enabled, IPC is disabled 00:03:55.988 EAL: Heap on socket 0 was expanded by 2MB 00:03:55.988 EAL: No shared files mode enabled, IPC is disabled 00:03:55.988 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:55.988 EAL: Mem event callback 'spdk:(nil)' registered 00:03:55.988 00:03:55.988 00:03:55.988 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.988 http://cunit.sourceforge.net/ 00:03:55.988 00:03:55.988 00:03:55.988 Suite: components_suite 00:03:55.988 Test: vtophys_malloc_test ...passed 00:03:55.988 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:55.988 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.988 EAL: Restoring previous memory policy: 4 00:03:55.988 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.988 EAL: request: mp_malloc_sync 00:03:55.988 EAL: No shared files mode enabled, IPC is disabled 00:03:55.988 EAL: Heap on socket 0 was expanded by 4MB 00:03:55.988 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.988 EAL: request: mp_malloc_sync 00:03:55.988 EAL: No shared files mode enabled, IPC is disabled 00:03:55.988 EAL: Heap on socket 0 was shrunk by 4MB 00:03:55.988 EAL: Trying to obtain current memory policy. 00:03:55.988 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.988 EAL: Restoring previous memory policy: 4 00:03:55.988 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.988 EAL: request: mp_malloc_sync 00:03:55.988 EAL: No shared files mode enabled, IPC is disabled 00:03:55.988 EAL: Heap on socket 0 was expanded by 6MB 00:03:55.988 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.988 EAL: request: mp_malloc_sync 00:03:55.988 EAL: No shared files mode enabled, IPC is disabled 00:03:55.988 EAL: Heap on socket 0 was shrunk by 6MB 00:03:55.988 EAL: Trying to obtain current memory policy. 00:03:55.988 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.988 EAL: Restoring previous memory policy: 4 00:03:55.988 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.988 EAL: request: mp_malloc_sync 00:03:55.988 EAL: No shared files mode enabled, IPC is disabled 00:03:55.988 EAL: Heap on socket 0 was expanded by 10MB 00:03:55.988 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.988 EAL: request: mp_malloc_sync 00:03:55.988 EAL: No shared files mode enabled, IPC is disabled 00:03:55.988 EAL: Heap on socket 0 was shrunk by 10MB 00:03:55.988 EAL: Trying to obtain current memory policy. 
00:03:55.988 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.988 EAL: Restoring previous memory policy: 4 00:03:55.988 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.988 EAL: request: mp_malloc_sync 00:03:55.988 EAL: No shared files mode enabled, IPC is disabled 00:03:55.988 EAL: Heap on socket 0 was expanded by 18MB 00:03:55.988 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.988 EAL: request: mp_malloc_sync 00:03:55.988 EAL: No shared files mode enabled, IPC is disabled 00:03:55.988 EAL: Heap on socket 0 was shrunk by 18MB 00:03:55.988 EAL: Trying to obtain current memory policy. 00:03:55.988 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.988 EAL: Restoring previous memory policy: 4 00:03:55.988 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.988 EAL: request: mp_malloc_sync 00:03:55.988 EAL: No shared files mode enabled, IPC is disabled 00:03:55.988 EAL: Heap on socket 0 was expanded by 34MB 00:03:55.988 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.988 EAL: request: mp_malloc_sync 00:03:55.988 EAL: No shared files mode enabled, IPC is disabled 00:03:55.988 EAL: Heap on socket 0 was shrunk by 34MB 00:03:55.988 EAL: Trying to obtain current memory policy. 00:03:55.988 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.988 EAL: Restoring previous memory policy: 4 00:03:55.988 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.988 EAL: request: mp_malloc_sync 00:03:55.988 EAL: No shared files mode enabled, IPC is disabled 00:03:55.988 EAL: Heap on socket 0 was expanded by 66MB 00:03:55.988 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.988 EAL: request: mp_malloc_sync 00:03:55.988 EAL: No shared files mode enabled, IPC is disabled 00:03:55.988 EAL: Heap on socket 0 was shrunk by 66MB 00:03:55.988 EAL: Trying to obtain current memory policy. 00:03:55.988 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.988 EAL: Restoring previous memory policy: 4 00:03:55.988 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.988 EAL: request: mp_malloc_sync 00:03:55.988 EAL: No shared files mode enabled, IPC is disabled 00:03:55.988 EAL: Heap on socket 0 was expanded by 130MB 00:03:55.988 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.988 EAL: request: mp_malloc_sync 00:03:55.988 EAL: No shared files mode enabled, IPC is disabled 00:03:55.988 EAL: Heap on socket 0 was shrunk by 130MB 00:03:55.988 EAL: Trying to obtain current memory policy. 00:03:55.988 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.988 EAL: Restoring previous memory policy: 4 00:03:55.988 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.988 EAL: request: mp_malloc_sync 00:03:55.988 EAL: No shared files mode enabled, IPC is disabled 00:03:55.988 EAL: Heap on socket 0 was expanded by 258MB 00:03:56.249 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.249 EAL: request: mp_malloc_sync 00:03:56.249 EAL: No shared files mode enabled, IPC is disabled 00:03:56.249 EAL: Heap on socket 0 was shrunk by 258MB 00:03:56.249 EAL: Trying to obtain current memory policy. 
00:03:56.249 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.249 EAL: Restoring previous memory policy: 4 00:03:56.249 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.249 EAL: request: mp_malloc_sync 00:03:56.249 EAL: No shared files mode enabled, IPC is disabled 00:03:56.249 EAL: Heap on socket 0 was expanded by 514MB 00:03:56.249 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.249 EAL: request: mp_malloc_sync 00:03:56.249 EAL: No shared files mode enabled, IPC is disabled 00:03:56.249 EAL: Heap on socket 0 was shrunk by 514MB 00:03:56.249 EAL: Trying to obtain current memory policy. 00:03:56.249 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.510 EAL: Restoring previous memory policy: 4 00:03:56.510 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.510 EAL: request: mp_malloc_sync 00:03:56.510 EAL: No shared files mode enabled, IPC is disabled 00:03:56.510 EAL: Heap on socket 0 was expanded by 1026MB 00:03:56.510 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.771 EAL: request: mp_malloc_sync 00:03:56.771 EAL: No shared files mode enabled, IPC is disabled 00:03:56.771 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:56.771 passed 00:03:56.771 00:03:56.771 Run Summary: Type Total Ran Passed Failed Inactive 00:03:56.771 suites 1 1 n/a 0 0 00:03:56.771 tests 2 2 2 0 0 00:03:56.771 asserts 497 497 497 0 n/a 00:03:56.771 00:03:56.771 Elapsed time = 0.688 seconds 00:03:56.771 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.771 EAL: request: mp_malloc_sync 00:03:56.771 EAL: No shared files mode enabled, IPC is disabled 00:03:56.771 EAL: Heap on socket 0 was shrunk by 2MB 00:03:56.771 EAL: No shared files mode enabled, IPC is disabled 00:03:56.771 EAL: No shared files mode enabled, IPC is disabled 00:03:56.771 EAL: No shared files mode enabled, IPC is disabled 00:03:56.771 00:03:56.771 real 0m0.840s 00:03:56.771 user 0m0.430s 00:03:56.771 sys 0m0.379s 00:03:56.771 18:54:13 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:56.771 18:54:13 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:56.771 ************************************ 00:03:56.771 END TEST env_vtophys 00:03:56.771 ************************************ 00:03:56.771 18:54:13 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:56.771 18:54:13 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:56.771 18:54:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:56.771 18:54:13 env -- common/autotest_common.sh@10 -- # set +x 00:03:56.771 ************************************ 00:03:56.771 START TEST env_pci 00:03:56.771 ************************************ 00:03:56.771 18:54:13 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:56.771 00:03:56.771 00:03:56.771 CUnit - A unit testing framework for C - Version 2.1-3 00:03:56.771 http://cunit.sourceforge.net/ 00:03:56.771 00:03:56.771 00:03:56.771 Suite: pci 00:03:56.771 Test: pci_hook ...[2024-11-26 18:54:13.872606] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2698022 has claimed it 00:03:56.771 EAL: Cannot find device (10000:00:01.0) 00:03:56.771 EAL: Failed to attach device on primary process 00:03:56.771 passed 00:03:56.771 00:03:56.771 Run Summary: Type Total Ran Passed Failed Inactive 
00:03:56.771 suites 1 1 n/a 0 0 00:03:56.771 tests 1 1 1 0 0 00:03:56.771 asserts 25 25 25 0 n/a 00:03:56.771 00:03:56.771 Elapsed time = 0.029 seconds 00:03:56.771 00:03:56.771 real 0m0.048s 00:03:56.771 user 0m0.010s 00:03:56.771 sys 0m0.038s 00:03:56.771 18:54:13 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:56.771 18:54:13 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:56.771 ************************************ 00:03:56.771 END TEST env_pci 00:03:56.771 ************************************ 00:03:56.771 18:54:13 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:56.771 18:54:13 env -- env/env.sh@15 -- # uname 00:03:56.771 18:54:13 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:56.771 18:54:13 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:56.771 18:54:13 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:56.771 18:54:13 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:56.771 18:54:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:56.771 18:54:13 env -- common/autotest_common.sh@10 -- # set +x 00:03:57.032 ************************************ 00:03:57.032 START TEST env_dpdk_post_init 00:03:57.032 ************************************ 00:03:57.032 18:54:13 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:57.032 EAL: Detected CPU lcores: 128 00:03:57.032 EAL: Detected NUMA nodes: 2 00:03:57.032 EAL: Detected shared linkage of DPDK 00:03:57.032 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:57.032 EAL: Selected IOVA mode 'VA' 00:03:57.032 EAL: VFIO support initialized 00:03:57.032 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:57.032 EAL: Using IOMMU type 1 (Type 1) 00:03:57.293 EAL: Ignore mapping IO port bar(1) 00:03:57.293 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:03:57.293 EAL: Ignore mapping IO port bar(1) 00:03:57.554 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:03:57.554 EAL: Ignore mapping IO port bar(1) 00:03:57.815 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:03:57.815 EAL: Ignore mapping IO port bar(1) 00:03:58.076 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:03:58.076 EAL: Ignore mapping IO port bar(1) 00:03:58.076 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:03:58.338 EAL: Ignore mapping IO port bar(1) 00:03:58.338 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:03:58.599 EAL: Ignore mapping IO port bar(1) 00:03:58.599 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:03:58.860 EAL: Ignore mapping IO port bar(1) 00:03:58.860 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:03:59.121 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:03:59.121 EAL: Ignore mapping IO port bar(1) 00:03:59.381 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:03:59.381 EAL: Ignore mapping IO port bar(1) 00:03:59.642 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:03:59.642 EAL: Ignore mapping IO port bar(1) 00:03:59.642 
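The env_pci failure a few lines back is intentional: the test pre-claims the lock file /var/tmp/spdk_pci_lock_10000:00:01.0 from another process (2698022), so spdk_pci_device_claim() cannot take it and the hook reports the expected error. Conceptually the claim is a write lock held on a per-BDF file; the sketch below shows that mechanism with plain POSIX fcntl locks. The path pattern is taken from the log, but the exact locking code inside SPDK is an assumption here, not a copy of its implementation.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Conceptual sketch: claim a PCI device by holding a write lock
     * on a per-BDF file, the mechanism behind the claim error above. */
    static int
    claim_device(const char *bdf)
    {
        char path[64];
        struct flock lk = { .l_type = F_WRLCK, .l_whence = SEEK_SET };
        int fd;

        snprintf(path, sizeof(path), "/var/tmp/spdk_pci_lock_%s", bdf);
        fd = open(path, O_RDWR | O_CREAT, 0600);
        if (fd < 0)
            return -1;
        if (fcntl(fd, F_SETLK, &lk) < 0) {
            /* Another process holds the lock: "probably process
             * ... has claimed it". */
            close(fd);
            return -1;
        }
        return fd;   /* keep the fd open to hold the claim */
    }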
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:03:59.903 EAL: Ignore mapping IO port bar(1) 00:03:59.903 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:04:00.163 EAL: Ignore mapping IO port bar(1) 00:04:00.163 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:04:00.423 EAL: Ignore mapping IO port bar(1) 00:04:00.423 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:04:00.423 EAL: Ignore mapping IO port bar(1) 00:04:00.683 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:04:00.683 EAL: Ignore mapping IO port bar(1) 00:04:00.943 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:04:00.943 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:04:00.943 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:04:00.943 Starting DPDK initialization... 00:04:00.943 Starting SPDK post initialization... 00:04:00.943 SPDK NVMe probe 00:04:00.943 Attaching to 0000:65:00.0 00:04:00.943 Attached to 0000:65:00.0 00:04:00.943 Cleaning up... 00:04:02.869 00:04:02.869 real 0m5.752s 00:04:02.869 user 0m0.107s 00:04:02.869 sys 0m0.197s 00:04:02.869 18:54:19 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.869 18:54:19 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:02.869 ************************************ 00:04:02.869 END TEST env_dpdk_post_init 00:04:02.869 ************************************ 00:04:02.869 18:54:19 env -- env/env.sh@26 -- # uname 00:04:02.869 18:54:19 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:02.869 18:54:19 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:02.869 18:54:19 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.869 18:54:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.869 18:54:19 env -- common/autotest_common.sh@10 -- # set +x 00:04:02.869 ************************************ 00:04:02.869 START TEST env_mem_callbacks 00:04:02.869 ************************************ 00:04:02.869 18:54:19 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:02.869 EAL: Detected CPU lcores: 128 00:04:02.869 EAL: Detected NUMA nodes: 2 00:04:02.869 EAL: Detected shared linkage of DPDK 00:04:02.869 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:02.869 EAL: Selected IOVA mode 'VA' 00:04:02.869 EAL: VFIO support initialized 00:04:02.869 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:02.869 00:04:02.869 00:04:02.869 CUnit - A unit testing framework for C - Version 2.1-3 00:04:02.869 http://cunit.sourceforge.net/ 00:04:02.869 00:04:02.869 00:04:02.869 Suite: memory 00:04:02.869 Test: test ... 
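The banner just printed introduces CUnit, and every "Run Summary: ... suites / tests / asserts" table in this log is CUnit's standard output. A minimal harness producing the same shape is sketched below; the CUnit calls are the library's real API, the suite and test names mirror the log, and the test body is purely illustrative.

    #include <CUnit/Basic.h>

    static void
    test_mem(void)
    {
        CU_ASSERT(1 + 1 == 2);   /* each assert is counted in the summary */
    }

    int
    main(void)
    {
        CU_pSuite suite;

        if (CU_initialize_registry() != CUE_SUCCESS)
            return CU_get_error();
        suite = CU_add_suite("memory", NULL, NULL);
        CU_add_test(suite, "test", test_mem);
        CU_basic_run_tests();    /* prints the Run Summary table seen here */
        CU_cleanup_registry();
        return CU_get_error();
    }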
00:04:02.869 register 0x200000200000 2097152 00:04:02.869 malloc 3145728 00:04:02.869 register 0x200000400000 4194304 00:04:02.869 buf 0x200000500000 len 3145728 PASSED 00:04:02.869 malloc 64 00:04:02.869 buf 0x2000004fff40 len 64 PASSED 00:04:02.869 malloc 4194304 00:04:02.869 register 0x200000800000 6291456 00:04:02.869 buf 0x200000a00000 len 4194304 PASSED 00:04:02.869 free 0x200000500000 3145728 00:04:02.869 free 0x2000004fff40 64 00:04:02.869 unregister 0x200000400000 4194304 PASSED 00:04:02.869 free 0x200000a00000 4194304 00:04:02.869 unregister 0x200000800000 6291456 PASSED 00:04:02.869 malloc 8388608 00:04:02.869 register 0x200000400000 10485760 00:04:02.869 buf 0x200000600000 len 8388608 PASSED 00:04:02.869 free 0x200000600000 8388608 00:04:02.869 unregister 0x200000400000 10485760 PASSED 00:04:02.869 passed 00:04:02.869 00:04:02.869 Run Summary: Type Total Ran Passed Failed Inactive 00:04:02.869 suites 1 1 n/a 0 0 00:04:02.869 tests 1 1 1 0 0 00:04:02.869 asserts 15 15 15 0 n/a 00:04:02.869 00:04:02.869 Elapsed time = 0.010 seconds 00:04:02.869 00:04:02.869 real 0m0.070s 00:04:02.869 user 0m0.025s 00:04:02.869 sys 0m0.044s 00:04:02.869 18:54:19 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.869 18:54:19 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:02.869 ************************************ 00:04:02.869 END TEST env_mem_callbacks 00:04:02.869 ************************************ 00:04:02.869 00:04:02.869 real 0m7.536s 00:04:02.869 user 0m1.029s 00:04:02.869 sys 0m1.062s 00:04:02.869 18:54:19 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.869 18:54:19 env -- common/autotest_common.sh@10 -- # set +x 00:04:02.869 ************************************ 00:04:02.869 END TEST env 00:04:02.869 ************************************ 00:04:02.869 18:54:19 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:02.869 18:54:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.869 18:54:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.869 18:54:19 -- common/autotest_common.sh@10 -- # set +x 00:04:02.869 ************************************ 00:04:02.869 START TEST rpc 00:04:02.869 ************************************ 00:04:02.869 18:54:20 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:03.132 * Looking for test storage... 
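The register/unregister trace above is the env layer telling SPDK's memory maps about new 2 MiB-granularity regions as its malloc pool grows, with the test's callback logging each notification. The sketch below uses the public spdk_mem_register()/spdk_mem_unregister() calls from spdk/env.h, which trigger exactly those notifications; the sizes mirror the log, while the surrounding function is illustrative and the test's own callback wiring (via an spdk_mem_map) is not reproduced here.

    #include <stdio.h>
    #include "spdk/env.h"

    /* Sketch: make a 2 MiB region visible to SPDK's memory maps, the
     * operation behind the 'register 0x... 2097152' lines above.
     * Registered callbacks are notified on both calls. */
    static int
    expose_region(void *vaddr)
    {
        size_t len = 2 * 1024 * 1024;   /* 2 MiB aligned and sized */
        int rc;

        rc = spdk_mem_register(vaddr, len);
        if (rc != 0) {
            fprintf(stderr, "register failed: %d\n", rc);
            return rc;
        }
        /* ... region may now back I/O buffers ... */
        return spdk_mem_unregister(vaddr, len);
    }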
00:04:03.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:03.132 18:54:20 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:03.132 18:54:20 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:03.132 18:54:20 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:03.132 18:54:20 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:03.132 18:54:20 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:03.132 18:54:20 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:03.132 18:54:20 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:03.132 18:54:20 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:03.132 18:54:20 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:03.132 18:54:20 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:03.132 18:54:20 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:03.132 18:54:20 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:03.132 18:54:20 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:03.132 18:54:20 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:03.132 18:54:20 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:03.132 18:54:20 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:03.132 18:54:20 rpc -- scripts/common.sh@345 -- # : 1 00:04:03.133 18:54:20 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:03.133 18:54:20 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:03.133 18:54:20 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:03.133 18:54:20 rpc -- scripts/common.sh@353 -- # local d=1 00:04:03.133 18:54:20 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:03.133 18:54:20 rpc -- scripts/common.sh@355 -- # echo 1 00:04:03.133 18:54:20 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:03.133 18:54:20 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:03.133 18:54:20 rpc -- scripts/common.sh@353 -- # local d=2 00:04:03.133 18:54:20 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:03.133 18:54:20 rpc -- scripts/common.sh@355 -- # echo 2 00:04:03.133 18:54:20 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:03.133 18:54:20 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:03.133 18:54:20 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:03.133 18:54:20 rpc -- scripts/common.sh@368 -- # return 0 00:04:03.133 18:54:20 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:03.133 18:54:20 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:03.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.133 --rc genhtml_branch_coverage=1 00:04:03.133 --rc genhtml_function_coverage=1 00:04:03.133 --rc genhtml_legend=1 00:04:03.133 --rc geninfo_all_blocks=1 00:04:03.133 --rc geninfo_unexecuted_blocks=1 00:04:03.133 00:04:03.133 ' 00:04:03.133 18:54:20 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:03.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.133 --rc genhtml_branch_coverage=1 00:04:03.133 --rc genhtml_function_coverage=1 00:04:03.133 --rc genhtml_legend=1 00:04:03.133 --rc geninfo_all_blocks=1 00:04:03.133 --rc geninfo_unexecuted_blocks=1 00:04:03.133 00:04:03.133 ' 00:04:03.133 18:54:20 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:03.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.133 --rc genhtml_branch_coverage=1 00:04:03.133 --rc genhtml_function_coverage=1 
00:04:03.133 --rc genhtml_legend=1 00:04:03.133 --rc geninfo_all_blocks=1 00:04:03.133 --rc geninfo_unexecuted_blocks=1 00:04:03.133 00:04:03.133 ' 00:04:03.133 18:54:20 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:03.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.133 --rc genhtml_branch_coverage=1 00:04:03.133 --rc genhtml_function_coverage=1 00:04:03.133 --rc genhtml_legend=1 00:04:03.133 --rc geninfo_all_blocks=1 00:04:03.133 --rc geninfo_unexecuted_blocks=1 00:04:03.133 00:04:03.133 ' 00:04:03.133 18:54:20 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2699375 00:04:03.133 18:54:20 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:03.133 18:54:20 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:03.133 18:54:20 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2699375 00:04:03.133 18:54:20 rpc -- common/autotest_common.sh@835 -- # '[' -z 2699375 ']' 00:04:03.133 18:54:20 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:03.133 18:54:20 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:03.133 18:54:20 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:03.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:03.133 18:54:20 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:03.133 18:54:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.133 [2024-11-26 18:54:20.292230] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:04:03.133 [2024-11-26 18:54:20.292303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2699375 ] 00:04:03.396 [2024-11-26 18:54:20.384471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.396 [2024-11-26 18:54:20.436897] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:03.396 [2024-11-26 18:54:20.436958] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2699375' to capture a snapshot of events at runtime. 00:04:03.396 [2024-11-26 18:54:20.436967] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:03.396 [2024-11-26 18:54:20.436974] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:03.396 [2024-11-26 18:54:20.436981] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2699375 for offline analysis/debug. 
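Once spdk_tgt is up, waitforlisten simply retries connecting to the Unix socket, and every rpc_cmd in the tests below is a JSON-RPC 2.0 call over that socket (the script helper wraps SPDK's rpc.py client). A bare-bones C client sending the same spdk_get_version call the skip_rpc test exercises later is sketched here; the socket path comes from the log, while the framing is deliberately naive, a single read rather than parsing until a complete JSON object arrives.

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int
    main(void)
    {
        struct sockaddr_un sa = { .sun_family = AF_UNIX };
        const char *req =
            "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"spdk_get_version\"}";
        char resp[4096];
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        ssize_t n;

        strncpy(sa.sun_path, "/var/tmp/spdk.sock", sizeof(sa.sun_path) - 1);
        if (fd < 0 || connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
            return 1;   /* waitforlisten loops until this succeeds */
        write(fd, req, strlen(req));
        n = read(fd, resp, sizeof(resp) - 1);   /* naive: one read */
        if (n > 0) {
            resp[n] = '\0';
            printf("%s\n", resp);
        }
        close(fd);
        return 0;
    }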
00:04:03.396 [2024-11-26 18:54:20.437788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.970 18:54:21 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:03.970 18:54:21 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:03.970 18:54:21 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:03.970 18:54:21 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:03.970 18:54:21 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:03.970 18:54:21 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:03.970 18:54:21 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.970 18:54:21 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.970 18:54:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.970 ************************************ 00:04:03.970 START TEST rpc_integrity 00:04:03.970 ************************************ 00:04:03.970 18:54:21 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:03.970 18:54:21 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:03.970 18:54:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.970 18:54:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.970 18:54:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.970 18:54:21 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:03.970 18:54:21 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:04.232 18:54:21 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:04.232 18:54:21 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:04.232 18:54:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.232 18:54:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.232 18:54:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.232 18:54:21 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:04.232 18:54:21 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:04.232 18:54:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.232 18:54:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.232 18:54:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.232 18:54:21 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:04.232 { 00:04:04.232 "name": "Malloc0", 00:04:04.232 "aliases": [ 00:04:04.232 "76dba786-3321-433b-83cf-1f76b6142228" 00:04:04.232 ], 00:04:04.232 "product_name": "Malloc disk", 00:04:04.232 "block_size": 512, 00:04:04.232 "num_blocks": 16384, 00:04:04.232 "uuid": "76dba786-3321-433b-83cf-1f76b6142228", 00:04:04.232 "assigned_rate_limits": { 00:04:04.232 "rw_ios_per_sec": 0, 00:04:04.232 "rw_mbytes_per_sec": 0, 00:04:04.232 "r_mbytes_per_sec": 0, 00:04:04.232 "w_mbytes_per_sec": 0 00:04:04.232 }, 
00:04:04.232 "claimed": false, 00:04:04.232 "zoned": false, 00:04:04.232 "supported_io_types": { 00:04:04.232 "read": true, 00:04:04.232 "write": true, 00:04:04.232 "unmap": true, 00:04:04.232 "flush": true, 00:04:04.232 "reset": true, 00:04:04.232 "nvme_admin": false, 00:04:04.232 "nvme_io": false, 00:04:04.232 "nvme_io_md": false, 00:04:04.232 "write_zeroes": true, 00:04:04.232 "zcopy": true, 00:04:04.232 "get_zone_info": false, 00:04:04.232 "zone_management": false, 00:04:04.232 "zone_append": false, 00:04:04.232 "compare": false, 00:04:04.232 "compare_and_write": false, 00:04:04.232 "abort": true, 00:04:04.232 "seek_hole": false, 00:04:04.232 "seek_data": false, 00:04:04.232 "copy": true, 00:04:04.232 "nvme_iov_md": false 00:04:04.232 }, 00:04:04.232 "memory_domains": [ 00:04:04.232 { 00:04:04.232 "dma_device_id": "system", 00:04:04.232 "dma_device_type": 1 00:04:04.232 }, 00:04:04.232 { 00:04:04.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:04.232 "dma_device_type": 2 00:04:04.232 } 00:04:04.232 ], 00:04:04.232 "driver_specific": {} 00:04:04.232 } 00:04:04.232 ]' 00:04:04.232 18:54:21 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:04.232 18:54:21 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:04.232 18:54:21 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:04.232 18:54:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.233 18:54:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.233 [2024-11-26 18:54:21.289996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:04.233 [2024-11-26 18:54:21.290049] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:04.233 [2024-11-26 18:54:21.290066] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x107a800 00:04:04.233 [2024-11-26 18:54:21.290074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:04.233 [2024-11-26 18:54:21.291636] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:04.233 [2024-11-26 18:54:21.291673] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:04.233 Passthru0 00:04:04.233 18:54:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.233 18:54:21 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:04.233 18:54:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.233 18:54:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.233 18:54:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.233 18:54:21 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:04.233 { 00:04:04.233 "name": "Malloc0", 00:04:04.233 "aliases": [ 00:04:04.233 "76dba786-3321-433b-83cf-1f76b6142228" 00:04:04.233 ], 00:04:04.233 "product_name": "Malloc disk", 00:04:04.233 "block_size": 512, 00:04:04.233 "num_blocks": 16384, 00:04:04.233 "uuid": "76dba786-3321-433b-83cf-1f76b6142228", 00:04:04.233 "assigned_rate_limits": { 00:04:04.233 "rw_ios_per_sec": 0, 00:04:04.233 "rw_mbytes_per_sec": 0, 00:04:04.233 "r_mbytes_per_sec": 0, 00:04:04.233 "w_mbytes_per_sec": 0 00:04:04.233 }, 00:04:04.233 "claimed": true, 00:04:04.233 "claim_type": "exclusive_write", 00:04:04.233 "zoned": false, 00:04:04.233 "supported_io_types": { 00:04:04.233 "read": true, 00:04:04.233 "write": true, 00:04:04.233 "unmap": true, 00:04:04.233 "flush": 
true, 00:04:04.233 "reset": true, 00:04:04.233 "nvme_admin": false, 00:04:04.233 "nvme_io": false, 00:04:04.233 "nvme_io_md": false, 00:04:04.233 "write_zeroes": true, 00:04:04.233 "zcopy": true, 00:04:04.233 "get_zone_info": false, 00:04:04.233 "zone_management": false, 00:04:04.233 "zone_append": false, 00:04:04.233 "compare": false, 00:04:04.233 "compare_and_write": false, 00:04:04.233 "abort": true, 00:04:04.233 "seek_hole": false, 00:04:04.233 "seek_data": false, 00:04:04.233 "copy": true, 00:04:04.233 "nvme_iov_md": false 00:04:04.233 }, 00:04:04.233 "memory_domains": [ 00:04:04.233 { 00:04:04.233 "dma_device_id": "system", 00:04:04.233 "dma_device_type": 1 00:04:04.233 }, 00:04:04.233 { 00:04:04.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:04.233 "dma_device_type": 2 00:04:04.233 } 00:04:04.233 ], 00:04:04.233 "driver_specific": {} 00:04:04.233 }, 00:04:04.233 { 00:04:04.233 "name": "Passthru0", 00:04:04.233 "aliases": [ 00:04:04.233 "9a2d52fb-45c5-55cd-bc2e-100c7efd0eb7" 00:04:04.233 ], 00:04:04.233 "product_name": "passthru", 00:04:04.233 "block_size": 512, 00:04:04.233 "num_blocks": 16384, 00:04:04.233 "uuid": "9a2d52fb-45c5-55cd-bc2e-100c7efd0eb7", 00:04:04.233 "assigned_rate_limits": { 00:04:04.233 "rw_ios_per_sec": 0, 00:04:04.233 "rw_mbytes_per_sec": 0, 00:04:04.233 "r_mbytes_per_sec": 0, 00:04:04.233 "w_mbytes_per_sec": 0 00:04:04.233 }, 00:04:04.233 "claimed": false, 00:04:04.233 "zoned": false, 00:04:04.233 "supported_io_types": { 00:04:04.233 "read": true, 00:04:04.233 "write": true, 00:04:04.233 "unmap": true, 00:04:04.233 "flush": true, 00:04:04.233 "reset": true, 00:04:04.233 "nvme_admin": false, 00:04:04.233 "nvme_io": false, 00:04:04.233 "nvme_io_md": false, 00:04:04.233 "write_zeroes": true, 00:04:04.233 "zcopy": true, 00:04:04.233 "get_zone_info": false, 00:04:04.233 "zone_management": false, 00:04:04.233 "zone_append": false, 00:04:04.233 "compare": false, 00:04:04.233 "compare_and_write": false, 00:04:04.233 "abort": true, 00:04:04.233 "seek_hole": false, 00:04:04.233 "seek_data": false, 00:04:04.233 "copy": true, 00:04:04.233 "nvme_iov_md": false 00:04:04.233 }, 00:04:04.233 "memory_domains": [ 00:04:04.233 { 00:04:04.233 "dma_device_id": "system", 00:04:04.233 "dma_device_type": 1 00:04:04.233 }, 00:04:04.233 { 00:04:04.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:04.233 "dma_device_type": 2 00:04:04.233 } 00:04:04.233 ], 00:04:04.233 "driver_specific": { 00:04:04.233 "passthru": { 00:04:04.233 "name": "Passthru0", 00:04:04.233 "base_bdev_name": "Malloc0" 00:04:04.233 } 00:04:04.233 } 00:04:04.233 } 00:04:04.233 ]' 00:04:04.233 18:54:21 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:04.233 18:54:21 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:04.233 18:54:21 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:04.233 18:54:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.233 18:54:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.233 18:54:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.233 18:54:21 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:04.233 18:54:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.233 18:54:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.233 18:54:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.233 18:54:21 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:04:04.233 18:54:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.233 18:54:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.233 18:54:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.233 18:54:21 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:04.233 18:54:21 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:04.496 18:54:21 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:04.496 00:04:04.496 real 0m0.305s 00:04:04.496 user 0m0.194s 00:04:04.496 sys 0m0.042s 00:04:04.496 18:54:21 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.496 18:54:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.496 ************************************ 00:04:04.496 END TEST rpc_integrity 00:04:04.496 ************************************ 00:04:04.496 18:54:21 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:04.496 18:54:21 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.496 18:54:21 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.496 18:54:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.496 ************************************ 00:04:04.496 START TEST rpc_plugins 00:04:04.496 ************************************ 00:04:04.496 18:54:21 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:04.496 18:54:21 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:04.496 18:54:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.496 18:54:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:04.496 18:54:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.496 18:54:21 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:04.496 18:54:21 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:04.496 18:54:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.496 18:54:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:04.496 18:54:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.496 18:54:21 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:04.496 { 00:04:04.496 "name": "Malloc1", 00:04:04.496 "aliases": [ 00:04:04.496 "fd8e5b6e-3b1e-4aee-82be-092f90022f00" 00:04:04.496 ], 00:04:04.496 "product_name": "Malloc disk", 00:04:04.496 "block_size": 4096, 00:04:04.496 "num_blocks": 256, 00:04:04.496 "uuid": "fd8e5b6e-3b1e-4aee-82be-092f90022f00", 00:04:04.496 "assigned_rate_limits": { 00:04:04.496 "rw_ios_per_sec": 0, 00:04:04.496 "rw_mbytes_per_sec": 0, 00:04:04.496 "r_mbytes_per_sec": 0, 00:04:04.496 "w_mbytes_per_sec": 0 00:04:04.496 }, 00:04:04.496 "claimed": false, 00:04:04.496 "zoned": false, 00:04:04.496 "supported_io_types": { 00:04:04.496 "read": true, 00:04:04.496 "write": true, 00:04:04.496 "unmap": true, 00:04:04.496 "flush": true, 00:04:04.496 "reset": true, 00:04:04.496 "nvme_admin": false, 00:04:04.496 "nvme_io": false, 00:04:04.496 "nvme_io_md": false, 00:04:04.496 "write_zeroes": true, 00:04:04.496 "zcopy": true, 00:04:04.496 "get_zone_info": false, 00:04:04.496 "zone_management": false, 00:04:04.496 "zone_append": false, 00:04:04.496 "compare": false, 00:04:04.496 "compare_and_write": false, 00:04:04.496 "abort": true, 00:04:04.496 "seek_hole": false, 00:04:04.496 "seek_data": false, 00:04:04.496 "copy": true, 00:04:04.496 "nvme_iov_md": false 
00:04:04.496 }, 00:04:04.496 "memory_domains": [ 00:04:04.496 { 00:04:04.496 "dma_device_id": "system", 00:04:04.496 "dma_device_type": 1 00:04:04.496 }, 00:04:04.496 { 00:04:04.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:04.496 "dma_device_type": 2 00:04:04.496 } 00:04:04.496 ], 00:04:04.496 "driver_specific": {} 00:04:04.496 } 00:04:04.496 ]' 00:04:04.496 18:54:21 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:04.496 18:54:21 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:04.496 18:54:21 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:04.496 18:54:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.496 18:54:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:04.496 18:54:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.496 18:54:21 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:04.496 18:54:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.496 18:54:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:04.496 18:54:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.496 18:54:21 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:04.496 18:54:21 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:04.496 18:54:21 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:04.496 00:04:04.496 real 0m0.155s 00:04:04.496 user 0m0.096s 00:04:04.496 sys 0m0.022s 00:04:04.496 18:54:21 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.496 18:54:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:04.496 ************************************ 00:04:04.496 END TEST rpc_plugins 00:04:04.496 ************************************ 00:04:04.758 18:54:21 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:04.758 18:54:21 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.758 18:54:21 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.758 18:54:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.758 ************************************ 00:04:04.758 START TEST rpc_trace_cmd_test 00:04:04.758 ************************************ 00:04:04.758 18:54:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:04.758 18:54:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:04.758 18:54:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:04.758 18:54:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.758 18:54:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:04.758 18:54:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.758 18:54:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:04.758 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2699375", 00:04:04.758 "tpoint_group_mask": "0x8", 00:04:04.758 "iscsi_conn": { 00:04:04.758 "mask": "0x2", 00:04:04.758 "tpoint_mask": "0x0" 00:04:04.758 }, 00:04:04.758 "scsi": { 00:04:04.758 "mask": "0x4", 00:04:04.758 "tpoint_mask": "0x0" 00:04:04.758 }, 00:04:04.758 "bdev": { 00:04:04.758 "mask": "0x8", 00:04:04.758 "tpoint_mask": "0xffffffffffffffff" 00:04:04.758 }, 00:04:04.758 "nvmf_rdma": { 00:04:04.758 "mask": "0x10", 00:04:04.758 "tpoint_mask": "0x0" 00:04:04.758 }, 00:04:04.758 "nvmf_tcp": { 00:04:04.758 "mask": "0x20", 00:04:04.758 
"tpoint_mask": "0x0" 00:04:04.758 }, 00:04:04.758 "ftl": { 00:04:04.758 "mask": "0x40", 00:04:04.758 "tpoint_mask": "0x0" 00:04:04.758 }, 00:04:04.758 "blobfs": { 00:04:04.758 "mask": "0x80", 00:04:04.758 "tpoint_mask": "0x0" 00:04:04.758 }, 00:04:04.758 "dsa": { 00:04:04.758 "mask": "0x200", 00:04:04.758 "tpoint_mask": "0x0" 00:04:04.758 }, 00:04:04.758 "thread": { 00:04:04.758 "mask": "0x400", 00:04:04.758 "tpoint_mask": "0x0" 00:04:04.758 }, 00:04:04.758 "nvme_pcie": { 00:04:04.758 "mask": "0x800", 00:04:04.758 "tpoint_mask": "0x0" 00:04:04.758 }, 00:04:04.758 "iaa": { 00:04:04.758 "mask": "0x1000", 00:04:04.758 "tpoint_mask": "0x0" 00:04:04.758 }, 00:04:04.758 "nvme_tcp": { 00:04:04.758 "mask": "0x2000", 00:04:04.758 "tpoint_mask": "0x0" 00:04:04.758 }, 00:04:04.758 "bdev_nvme": { 00:04:04.758 "mask": "0x4000", 00:04:04.758 "tpoint_mask": "0x0" 00:04:04.758 }, 00:04:04.758 "sock": { 00:04:04.758 "mask": "0x8000", 00:04:04.758 "tpoint_mask": "0x0" 00:04:04.758 }, 00:04:04.758 "blob": { 00:04:04.758 "mask": "0x10000", 00:04:04.758 "tpoint_mask": "0x0" 00:04:04.758 }, 00:04:04.758 "bdev_raid": { 00:04:04.758 "mask": "0x20000", 00:04:04.758 "tpoint_mask": "0x0" 00:04:04.758 }, 00:04:04.758 "scheduler": { 00:04:04.758 "mask": "0x40000", 00:04:04.758 "tpoint_mask": "0x0" 00:04:04.758 } 00:04:04.758 }' 00:04:04.758 18:54:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:04.758 18:54:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:04.758 18:54:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:04.758 18:54:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:04.758 18:54:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:04.758 18:54:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:04.758 18:54:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:05.020 18:54:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:05.020 18:54:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:05.020 18:54:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:05.020 00:04:05.020 real 0m0.254s 00:04:05.020 user 0m0.215s 00:04:05.020 sys 0m0.031s 00:04:05.020 18:54:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.020 18:54:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:05.020 ************************************ 00:04:05.020 END TEST rpc_trace_cmd_test 00:04:05.020 ************************************ 00:04:05.020 18:54:22 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:05.020 18:54:22 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:05.020 18:54:22 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:05.020 18:54:22 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.020 18:54:22 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.020 18:54:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.020 ************************************ 00:04:05.020 START TEST rpc_daemon_integrity 00:04:05.020 ************************************ 00:04:05.020 18:54:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:05.020 18:54:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:05.020 18:54:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.020 18:54:22 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.020 18:54:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.021 18:54:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:05.021 18:54:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:05.021 18:54:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:05.021 18:54:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:05.021 18:54:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.021 18:54:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.021 18:54:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.021 18:54:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:05.021 18:54:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:05.021 18:54:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.021 18:54:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.021 18:54:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.021 18:54:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:05.021 { 00:04:05.021 "name": "Malloc2", 00:04:05.021 "aliases": [ 00:04:05.021 "c7a35a96-3b54-459f-9ff3-2994e1af8cea" 00:04:05.021 ], 00:04:05.021 "product_name": "Malloc disk", 00:04:05.021 "block_size": 512, 00:04:05.021 "num_blocks": 16384, 00:04:05.021 "uuid": "c7a35a96-3b54-459f-9ff3-2994e1af8cea", 00:04:05.021 "assigned_rate_limits": { 00:04:05.021 "rw_ios_per_sec": 0, 00:04:05.021 "rw_mbytes_per_sec": 0, 00:04:05.021 "r_mbytes_per_sec": 0, 00:04:05.021 "w_mbytes_per_sec": 0 00:04:05.021 }, 00:04:05.021 "claimed": false, 00:04:05.021 "zoned": false, 00:04:05.021 "supported_io_types": { 00:04:05.021 "read": true, 00:04:05.021 "write": true, 00:04:05.021 "unmap": true, 00:04:05.021 "flush": true, 00:04:05.021 "reset": true, 00:04:05.021 "nvme_admin": false, 00:04:05.021 "nvme_io": false, 00:04:05.021 "nvme_io_md": false, 00:04:05.021 "write_zeroes": true, 00:04:05.021 "zcopy": true, 00:04:05.021 "get_zone_info": false, 00:04:05.021 "zone_management": false, 00:04:05.021 "zone_append": false, 00:04:05.021 "compare": false, 00:04:05.021 "compare_and_write": false, 00:04:05.021 "abort": true, 00:04:05.021 "seek_hole": false, 00:04:05.021 "seek_data": false, 00:04:05.021 "copy": true, 00:04:05.021 "nvme_iov_md": false 00:04:05.021 }, 00:04:05.021 "memory_domains": [ 00:04:05.021 { 00:04:05.021 "dma_device_id": "system", 00:04:05.021 "dma_device_type": 1 00:04:05.021 }, 00:04:05.021 { 00:04:05.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:05.021 "dma_device_type": 2 00:04:05.021 } 00:04:05.021 ], 00:04:05.021 "driver_specific": {} 00:04:05.021 } 00:04:05.021 ]' 00:04:05.021 18:54:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:05.283 18:54:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:05.283 18:54:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:05.283 18:54:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.283 18:54:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.283 [2024-11-26 18:54:22.252621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:05.283 
[2024-11-26 18:54:22.252661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:05.283 [2024-11-26 18:54:22.252677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf36fe0 00:04:05.283 [2024-11-26 18:54:22.252684] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:05.283 [2024-11-26 18:54:22.254118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:05.283 [2024-11-26 18:54:22.254153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:05.283 Passthru0 00:04:05.283 18:54:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.283 18:54:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:05.283 18:54:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.283 18:54:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.283 18:54:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.283 18:54:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:05.283 { 00:04:05.283 "name": "Malloc2", 00:04:05.283 "aliases": [ 00:04:05.283 "c7a35a96-3b54-459f-9ff3-2994e1af8cea" 00:04:05.283 ], 00:04:05.283 "product_name": "Malloc disk", 00:04:05.283 "block_size": 512, 00:04:05.283 "num_blocks": 16384, 00:04:05.283 "uuid": "c7a35a96-3b54-459f-9ff3-2994e1af8cea", 00:04:05.283 "assigned_rate_limits": { 00:04:05.283 "rw_ios_per_sec": 0, 00:04:05.283 "rw_mbytes_per_sec": 0, 00:04:05.283 "r_mbytes_per_sec": 0, 00:04:05.283 "w_mbytes_per_sec": 0 00:04:05.283 }, 00:04:05.283 "claimed": true, 00:04:05.283 "claim_type": "exclusive_write", 00:04:05.283 "zoned": false, 00:04:05.283 "supported_io_types": { 00:04:05.283 "read": true, 00:04:05.283 "write": true, 00:04:05.283 "unmap": true, 00:04:05.283 "flush": true, 00:04:05.283 "reset": true, 00:04:05.283 "nvme_admin": false, 00:04:05.283 "nvme_io": false, 00:04:05.283 "nvme_io_md": false, 00:04:05.283 "write_zeroes": true, 00:04:05.283 "zcopy": true, 00:04:05.283 "get_zone_info": false, 00:04:05.283 "zone_management": false, 00:04:05.283 "zone_append": false, 00:04:05.283 "compare": false, 00:04:05.283 "compare_and_write": false, 00:04:05.283 "abort": true, 00:04:05.283 "seek_hole": false, 00:04:05.283 "seek_data": false, 00:04:05.283 "copy": true, 00:04:05.283 "nvme_iov_md": false 00:04:05.283 }, 00:04:05.283 "memory_domains": [ 00:04:05.283 { 00:04:05.283 "dma_device_id": "system", 00:04:05.283 "dma_device_type": 1 00:04:05.283 }, 00:04:05.283 { 00:04:05.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:05.283 "dma_device_type": 2 00:04:05.283 } 00:04:05.283 ], 00:04:05.283 "driver_specific": {} 00:04:05.283 }, 00:04:05.283 { 00:04:05.283 "name": "Passthru0", 00:04:05.283 "aliases": [ 00:04:05.283 "60196b09-a345-55df-a6c1-ca1f43867728" 00:04:05.283 ], 00:04:05.283 "product_name": "passthru", 00:04:05.283 "block_size": 512, 00:04:05.283 "num_blocks": 16384, 00:04:05.283 "uuid": "60196b09-a345-55df-a6c1-ca1f43867728", 00:04:05.283 "assigned_rate_limits": { 00:04:05.283 "rw_ios_per_sec": 0, 00:04:05.283 "rw_mbytes_per_sec": 0, 00:04:05.283 "r_mbytes_per_sec": 0, 00:04:05.283 "w_mbytes_per_sec": 0 00:04:05.283 }, 00:04:05.283 "claimed": false, 00:04:05.283 "zoned": false, 00:04:05.283 "supported_io_types": { 00:04:05.283 "read": true, 00:04:05.283 "write": true, 00:04:05.283 "unmap": true, 00:04:05.283 "flush": true, 00:04:05.283 "reset": true, 
00:04:05.283 "nvme_admin": false, 00:04:05.283 "nvme_io": false, 00:04:05.283 "nvme_io_md": false, 00:04:05.283 "write_zeroes": true, 00:04:05.283 "zcopy": true, 00:04:05.283 "get_zone_info": false, 00:04:05.283 "zone_management": false, 00:04:05.283 "zone_append": false, 00:04:05.283 "compare": false, 00:04:05.283 "compare_and_write": false, 00:04:05.283 "abort": true, 00:04:05.283 "seek_hole": false, 00:04:05.283 "seek_data": false, 00:04:05.283 "copy": true, 00:04:05.283 "nvme_iov_md": false 00:04:05.283 }, 00:04:05.283 "memory_domains": [ 00:04:05.283 { 00:04:05.283 "dma_device_id": "system", 00:04:05.283 "dma_device_type": 1 00:04:05.283 }, 00:04:05.283 { 00:04:05.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:05.283 "dma_device_type": 2 00:04:05.283 } 00:04:05.283 ], 00:04:05.283 "driver_specific": { 00:04:05.283 "passthru": { 00:04:05.283 "name": "Passthru0", 00:04:05.283 "base_bdev_name": "Malloc2" 00:04:05.283 } 00:04:05.283 } 00:04:05.283 } 00:04:05.283 ]' 00:04:05.283 18:54:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:05.283 18:54:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:05.283 18:54:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:05.283 18:54:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.283 18:54:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.283 18:54:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.283 18:54:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:05.283 18:54:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.283 18:54:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.283 18:54:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.283 18:54:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:05.283 18:54:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.283 18:54:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.283 18:54:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.283 18:54:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:05.283 18:54:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:05.283 18:54:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:05.283 00:04:05.283 real 0m0.311s 00:04:05.283 user 0m0.183s 00:04:05.283 sys 0m0.060s 00:04:05.284 18:54:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.284 18:54:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.284 ************************************ 00:04:05.284 END TEST rpc_daemon_integrity 00:04:05.284 ************************************ 00:04:05.284 18:54:22 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:05.284 18:54:22 rpc -- rpc/rpc.sh@84 -- # killprocess 2699375 00:04:05.284 18:54:22 rpc -- common/autotest_common.sh@954 -- # '[' -z 2699375 ']' 00:04:05.284 18:54:22 rpc -- common/autotest_common.sh@958 -- # kill -0 2699375 00:04:05.284 18:54:22 rpc -- common/autotest_common.sh@959 -- # uname 00:04:05.284 18:54:22 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:05.284 18:54:22 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2699375 
00:04:05.545 18:54:22 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:05.545 18:54:22 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:05.545 18:54:22 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2699375' 00:04:05.545 killing process with pid 2699375 00:04:05.545 18:54:22 rpc -- common/autotest_common.sh@973 -- # kill 2699375 00:04:05.545 18:54:22 rpc -- common/autotest_common.sh@978 -- # wait 2699375 00:04:05.806 00:04:05.806 real 0m2.736s 00:04:05.806 user 0m3.489s 00:04:05.806 sys 0m0.841s 00:04:05.806 18:54:22 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.806 18:54:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.806 ************************************ 00:04:05.806 END TEST rpc 00:04:05.806 ************************************ 00:04:05.806 18:54:22 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:05.806 18:54:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.806 18:54:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.806 18:54:22 -- common/autotest_common.sh@10 -- # set +x 00:04:05.806 ************************************ 00:04:05.806 START TEST skip_rpc 00:04:05.806 ************************************ 00:04:05.806 18:54:22 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:05.806 * Looking for test storage... 00:04:05.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:05.806 18:54:22 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:05.806 18:54:22 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:05.806 18:54:22 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:06.068 18:54:23 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:06.068 18:54:23 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:06.068 18:54:23 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:06.068 18:54:23 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:06.068 18:54:23 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:06.068 18:54:23 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:06.068 18:54:23 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:06.068 18:54:23 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:06.068 18:54:23 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:06.068 18:54:23 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:06.068 18:54:23 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:06.068 18:54:23 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:06.068 18:54:23 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:06.068 18:54:23 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:06.068 18:54:23 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:06.068 18:54:23 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:06.068 18:54:23 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:06.068 18:54:23 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:06.068 18:54:23 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:06.068 18:54:23 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:06.068 18:54:23 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:06.068 18:54:23 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:06.068 18:54:23 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:06.068 18:54:23 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:06.068 18:54:23 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:06.068 18:54:23 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:06.069 18:54:23 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:06.069 18:54:23 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:06.069 18:54:23 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:06.069 18:54:23 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:06.069 18:54:23 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:06.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.069 --rc genhtml_branch_coverage=1 00:04:06.069 --rc genhtml_function_coverage=1 00:04:06.069 --rc genhtml_legend=1 00:04:06.069 --rc geninfo_all_blocks=1 00:04:06.069 --rc geninfo_unexecuted_blocks=1 00:04:06.069 00:04:06.069 ' 00:04:06.069 18:54:23 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:06.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.069 --rc genhtml_branch_coverage=1 00:04:06.069 --rc genhtml_function_coverage=1 00:04:06.069 --rc genhtml_legend=1 00:04:06.069 --rc geninfo_all_blocks=1 00:04:06.069 --rc geninfo_unexecuted_blocks=1 00:04:06.069 00:04:06.069 ' 00:04:06.069 18:54:23 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:06.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.069 --rc genhtml_branch_coverage=1 00:04:06.069 --rc genhtml_function_coverage=1 00:04:06.069 --rc genhtml_legend=1 00:04:06.069 --rc geninfo_all_blocks=1 00:04:06.069 --rc geninfo_unexecuted_blocks=1 00:04:06.069 00:04:06.069 ' 00:04:06.069 18:54:23 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:06.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.069 --rc genhtml_branch_coverage=1 00:04:06.069 --rc genhtml_function_coverage=1 00:04:06.069 --rc genhtml_legend=1 00:04:06.069 --rc geninfo_all_blocks=1 00:04:06.069 --rc geninfo_unexecuted_blocks=1 00:04:06.069 00:04:06.069 ' 00:04:06.069 18:54:23 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:06.069 18:54:23 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:06.069 18:54:23 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:06.069 18:54:23 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.069 18:54:23 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.069 18:54:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.069 ************************************ 00:04:06.069 START TEST skip_rpc 00:04:06.069 ************************************ 00:04:06.069 18:54:23 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:06.069 
18:54:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2700226 00:04:06.069 18:54:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:06.069 18:54:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:06.069 18:54:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:06.069 [2024-11-26 18:54:23.148401] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:04:06.069 [2024-11-26 18:54:23.148464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2700226 ] 00:04:06.069 [2024-11-26 18:54:23.242678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.331 [2024-11-26 18:54:23.295828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.623 18:54:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:11.623 18:54:28 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:11.623 18:54:28 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:11.623 18:54:28 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:11.623 18:54:28 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:11.623 18:54:28 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:11.623 18:54:28 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:11.623 18:54:28 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:11.623 18:54:28 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.623 18:54:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.623 18:54:28 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:11.623 18:54:28 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:11.623 18:54:28 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:11.623 18:54:28 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:11.623 18:54:28 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:11.623 18:54:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:11.623 18:54:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2700226 00:04:11.623 18:54:28 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2700226 ']' 00:04:11.623 18:54:28 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2700226 00:04:11.623 18:54:28 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:11.623 18:54:28 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:11.623 18:54:28 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2700226 00:04:11.623 18:54:28 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:11.623 18:54:28 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:11.623 18:54:28 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2700226' 00:04:11.623 killing process with pid 2700226 00:04:11.623 18:54:28 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2700226 00:04:11.623 18:54:28 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2700226 00:04:11.623 00:04:11.623 real 0m5.266s 00:04:11.623 user 0m5.011s 00:04:11.623 sys 0m0.304s 00:04:11.623 18:54:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.623 18:54:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.623 ************************************ 00:04:11.623 END TEST skip_rpc 00:04:11.623 ************************************ 00:04:11.623 18:54:28 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:11.623 18:54:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.623 18:54:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.623 18:54:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.623 ************************************ 00:04:11.623 START TEST skip_rpc_with_json 00:04:11.623 ************************************ 00:04:11.623 18:54:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:11.623 18:54:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:11.623 18:54:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2701276 00:04:11.623 18:54:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:11.623 18:54:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2701276 00:04:11.623 18:54:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:11.623 18:54:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2701276 ']' 00:04:11.623 18:54:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:11.623 18:54:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:11.623 18:54:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:11.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:11.623 18:54:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:11.623 18:54:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:11.623 [2024-11-26 18:54:28.494025] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
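The NOT/es dance traced in TEST skip_rpc above is the suite's negative assertion: the test passes only because spdk_get_version fails against a target started with --no-rpc-server. A minimal sketch of that helper, assuming plain bash and leaving out the signal and xtrace handling the real autotest_common.sh adds:

NOT() {
    local es=0
    "$@" || es=$?
    # succeed only if the wrapped command failed
    (( es != 0 ))
}
NOT rpc_cmd spdk_get_version    # rpc_cmd is the suite's rpc.py wrapper; passes here since no RPC server listens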
00:04:11.623 [2024-11-26 18:54:28.494085] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2701276 ] 00:04:11.623 [2024-11-26 18:54:28.580755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.623 [2024-11-26 18:54:28.615796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.193 18:54:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:12.193 18:54:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:12.193 18:54:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:12.193 18:54:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.193 18:54:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:12.193 [2024-11-26 18:54:29.280027] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:12.193 request: 00:04:12.193 { 00:04:12.193 "trtype": "tcp", 00:04:12.193 "method": "nvmf_get_transports", 00:04:12.193 "req_id": 1 00:04:12.193 } 00:04:12.193 Got JSON-RPC error response 00:04:12.193 response: 00:04:12.193 { 00:04:12.193 "code": -19, 00:04:12.193 "message": "No such device" 00:04:12.193 } 00:04:12.193 18:54:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:12.193 18:54:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:12.193 18:54:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.193 18:54:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:12.193 [2024-11-26 18:54:29.292122] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:12.193 18:54:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.193 18:54:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:12.193 18:54:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.193 18:54:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:12.454 18:54:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.454 18:54:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:12.454 { 00:04:12.454 "subsystems": [ 00:04:12.454 { 00:04:12.454 "subsystem": "fsdev", 00:04:12.454 "config": [ 00:04:12.454 { 00:04:12.454 "method": "fsdev_set_opts", 00:04:12.454 "params": { 00:04:12.454 "fsdev_io_pool_size": 65535, 00:04:12.454 "fsdev_io_cache_size": 256 00:04:12.454 } 00:04:12.454 } 00:04:12.454 ] 00:04:12.454 }, 00:04:12.454 { 00:04:12.454 "subsystem": "vfio_user_target", 00:04:12.454 "config": null 00:04:12.454 }, 00:04:12.454 { 00:04:12.454 "subsystem": "keyring", 00:04:12.454 "config": [] 00:04:12.454 }, 00:04:12.454 { 00:04:12.454 "subsystem": "iobuf", 00:04:12.454 "config": [ 00:04:12.454 { 00:04:12.454 "method": "iobuf_set_options", 00:04:12.454 "params": { 00:04:12.454 "small_pool_count": 8192, 00:04:12.454 "large_pool_count": 1024, 00:04:12.454 "small_bufsize": 8192, 00:04:12.454 "large_bufsize": 135168, 00:04:12.454 "enable_numa": false 00:04:12.454 } 00:04:12.454 } 
00:04:12.454 ] 00:04:12.454 }, 00:04:12.454 { 00:04:12.454 "subsystem": "sock", 00:04:12.454 "config": [ 00:04:12.454 { 00:04:12.454 "method": "sock_set_default_impl", 00:04:12.454 "params": { 00:04:12.454 "impl_name": "posix" 00:04:12.454 } 00:04:12.454 }, 00:04:12.454 { 00:04:12.454 "method": "sock_impl_set_options", 00:04:12.454 "params": { 00:04:12.454 "impl_name": "ssl", 00:04:12.454 "recv_buf_size": 4096, 00:04:12.454 "send_buf_size": 4096, 00:04:12.454 "enable_recv_pipe": true, 00:04:12.454 "enable_quickack": false, 00:04:12.454 "enable_placement_id": 0, 00:04:12.454 "enable_zerocopy_send_server": true, 00:04:12.454 "enable_zerocopy_send_client": false, 00:04:12.454 "zerocopy_threshold": 0, 00:04:12.454 "tls_version": 0, 00:04:12.454 "enable_ktls": false 00:04:12.454 } 00:04:12.454 }, 00:04:12.454 { 00:04:12.454 "method": "sock_impl_set_options", 00:04:12.454 "params": { 00:04:12.454 "impl_name": "posix", 00:04:12.454 "recv_buf_size": 2097152, 00:04:12.454 "send_buf_size": 2097152, 00:04:12.454 "enable_recv_pipe": true, 00:04:12.454 "enable_quickack": false, 00:04:12.454 "enable_placement_id": 0, 00:04:12.454 "enable_zerocopy_send_server": true, 00:04:12.454 "enable_zerocopy_send_client": false, 00:04:12.454 "zerocopy_threshold": 0, 00:04:12.454 "tls_version": 0, 00:04:12.454 "enable_ktls": false 00:04:12.454 } 00:04:12.454 } 00:04:12.454 ] 00:04:12.454 }, 00:04:12.454 { 00:04:12.454 "subsystem": "vmd", 00:04:12.454 "config": [] 00:04:12.454 }, 00:04:12.454 { 00:04:12.454 "subsystem": "accel", 00:04:12.454 "config": [ 00:04:12.454 { 00:04:12.454 "method": "accel_set_options", 00:04:12.454 "params": { 00:04:12.454 "small_cache_size": 128, 00:04:12.454 "large_cache_size": 16, 00:04:12.454 "task_count": 2048, 00:04:12.454 "sequence_count": 2048, 00:04:12.454 "buf_count": 2048 00:04:12.454 } 00:04:12.454 } 00:04:12.454 ] 00:04:12.454 }, 00:04:12.454 { 00:04:12.454 "subsystem": "bdev", 00:04:12.454 "config": [ 00:04:12.454 { 00:04:12.454 "method": "bdev_set_options", 00:04:12.454 "params": { 00:04:12.454 "bdev_io_pool_size": 65535, 00:04:12.454 "bdev_io_cache_size": 256, 00:04:12.454 "bdev_auto_examine": true, 00:04:12.454 "iobuf_small_cache_size": 128, 00:04:12.454 "iobuf_large_cache_size": 16 00:04:12.454 } 00:04:12.454 }, 00:04:12.454 { 00:04:12.454 "method": "bdev_raid_set_options", 00:04:12.454 "params": { 00:04:12.454 "process_window_size_kb": 1024, 00:04:12.454 "process_max_bandwidth_mb_sec": 0 00:04:12.454 } 00:04:12.454 }, 00:04:12.454 { 00:04:12.454 "method": "bdev_iscsi_set_options", 00:04:12.454 "params": { 00:04:12.454 "timeout_sec": 30 00:04:12.454 } 00:04:12.454 }, 00:04:12.454 { 00:04:12.454 "method": "bdev_nvme_set_options", 00:04:12.454 "params": { 00:04:12.454 "action_on_timeout": "none", 00:04:12.454 "timeout_us": 0, 00:04:12.454 "timeout_admin_us": 0, 00:04:12.454 "keep_alive_timeout_ms": 10000, 00:04:12.454 "arbitration_burst": 0, 00:04:12.454 "low_priority_weight": 0, 00:04:12.454 "medium_priority_weight": 0, 00:04:12.454 "high_priority_weight": 0, 00:04:12.454 "nvme_adminq_poll_period_us": 10000, 00:04:12.454 "nvme_ioq_poll_period_us": 0, 00:04:12.454 "io_queue_requests": 0, 00:04:12.454 "delay_cmd_submit": true, 00:04:12.454 "transport_retry_count": 4, 00:04:12.454 "bdev_retry_count": 3, 00:04:12.454 "transport_ack_timeout": 0, 00:04:12.454 "ctrlr_loss_timeout_sec": 0, 00:04:12.454 "reconnect_delay_sec": 0, 00:04:12.454 "fast_io_fail_timeout_sec": 0, 00:04:12.454 "disable_auto_failback": false, 00:04:12.454 "generate_uuids": false, 00:04:12.454 "transport_tos": 
0, 00:04:12.454 "nvme_error_stat": false, 00:04:12.454 "rdma_srq_size": 0, 00:04:12.454 "io_path_stat": false, 00:04:12.454 "allow_accel_sequence": false, 00:04:12.454 "rdma_max_cq_size": 0, 00:04:12.454 "rdma_cm_event_timeout_ms": 0, 00:04:12.454 "dhchap_digests": [ 00:04:12.454 "sha256", 00:04:12.454 "sha384", 00:04:12.454 "sha512" 00:04:12.454 ], 00:04:12.454 "dhchap_dhgroups": [ 00:04:12.454 "null", 00:04:12.454 "ffdhe2048", 00:04:12.454 "ffdhe3072", 00:04:12.454 "ffdhe4096", 00:04:12.454 "ffdhe6144", 00:04:12.455 "ffdhe8192" 00:04:12.455 ] 00:04:12.455 } 00:04:12.455 }, 00:04:12.455 { 00:04:12.455 "method": "bdev_nvme_set_hotplug", 00:04:12.455 "params": { 00:04:12.455 "period_us": 100000, 00:04:12.455 "enable": false 00:04:12.455 } 00:04:12.455 }, 00:04:12.455 { 00:04:12.455 "method": "bdev_wait_for_examine" 00:04:12.455 } 00:04:12.455 ] 00:04:12.455 }, 00:04:12.455 { 00:04:12.455 "subsystem": "scsi", 00:04:12.455 "config": null 00:04:12.455 }, 00:04:12.455 { 00:04:12.455 "subsystem": "scheduler", 00:04:12.455 "config": [ 00:04:12.455 { 00:04:12.455 "method": "framework_set_scheduler", 00:04:12.455 "params": { 00:04:12.455 "name": "static" 00:04:12.455 } 00:04:12.455 } 00:04:12.455 ] 00:04:12.455 }, 00:04:12.455 { 00:04:12.455 "subsystem": "vhost_scsi", 00:04:12.455 "config": [] 00:04:12.455 }, 00:04:12.455 { 00:04:12.455 "subsystem": "vhost_blk", 00:04:12.455 "config": [] 00:04:12.455 }, 00:04:12.455 { 00:04:12.455 "subsystem": "ublk", 00:04:12.455 "config": [] 00:04:12.455 }, 00:04:12.455 { 00:04:12.455 "subsystem": "nbd", 00:04:12.455 "config": [] 00:04:12.455 }, 00:04:12.455 { 00:04:12.455 "subsystem": "nvmf", 00:04:12.455 "config": [ 00:04:12.455 { 00:04:12.455 "method": "nvmf_set_config", 00:04:12.455 "params": { 00:04:12.455 "discovery_filter": "match_any", 00:04:12.455 "admin_cmd_passthru": { 00:04:12.455 "identify_ctrlr": false 00:04:12.455 }, 00:04:12.455 "dhchap_digests": [ 00:04:12.455 "sha256", 00:04:12.455 "sha384", 00:04:12.455 "sha512" 00:04:12.455 ], 00:04:12.455 "dhchap_dhgroups": [ 00:04:12.455 "null", 00:04:12.455 "ffdhe2048", 00:04:12.455 "ffdhe3072", 00:04:12.455 "ffdhe4096", 00:04:12.455 "ffdhe6144", 00:04:12.455 "ffdhe8192" 00:04:12.455 ] 00:04:12.455 } 00:04:12.455 }, 00:04:12.455 { 00:04:12.455 "method": "nvmf_set_max_subsystems", 00:04:12.455 "params": { 00:04:12.455 "max_subsystems": 1024 00:04:12.455 } 00:04:12.455 }, 00:04:12.455 { 00:04:12.455 "method": "nvmf_set_crdt", 00:04:12.455 "params": { 00:04:12.455 "crdt1": 0, 00:04:12.455 "crdt2": 0, 00:04:12.455 "crdt3": 0 00:04:12.455 } 00:04:12.455 }, 00:04:12.455 { 00:04:12.455 "method": "nvmf_create_transport", 00:04:12.455 "params": { 00:04:12.455 "trtype": "TCP", 00:04:12.455 "max_queue_depth": 128, 00:04:12.455 "max_io_qpairs_per_ctrlr": 127, 00:04:12.455 "in_capsule_data_size": 4096, 00:04:12.455 "max_io_size": 131072, 00:04:12.455 "io_unit_size": 131072, 00:04:12.455 "max_aq_depth": 128, 00:04:12.455 "num_shared_buffers": 511, 00:04:12.455 "buf_cache_size": 4294967295, 00:04:12.455 "dif_insert_or_strip": false, 00:04:12.455 "zcopy": false, 00:04:12.455 "c2h_success": true, 00:04:12.455 "sock_priority": 0, 00:04:12.455 "abort_timeout_sec": 1, 00:04:12.455 "ack_timeout": 0, 00:04:12.455 "data_wr_pool_size": 0 00:04:12.455 } 00:04:12.455 } 00:04:12.455 ] 00:04:12.455 }, 00:04:12.455 { 00:04:12.455 "subsystem": "iscsi", 00:04:12.455 "config": [ 00:04:12.455 { 00:04:12.455 "method": "iscsi_set_options", 00:04:12.455 "params": { 00:04:12.455 "node_base": "iqn.2016-06.io.spdk", 00:04:12.455 "max_sessions": 
128, 00:04:12.455 "max_connections_per_session": 2, 00:04:12.455 "max_queue_depth": 64, 00:04:12.455 "default_time2wait": 2, 00:04:12.455 "default_time2retain": 20, 00:04:12.455 "first_burst_length": 8192, 00:04:12.455 "immediate_data": true, 00:04:12.455 "allow_duplicated_isid": false, 00:04:12.455 "error_recovery_level": 0, 00:04:12.455 "nop_timeout": 60, 00:04:12.455 "nop_in_interval": 30, 00:04:12.455 "disable_chap": false, 00:04:12.455 "require_chap": false, 00:04:12.455 "mutual_chap": false, 00:04:12.455 "chap_group": 0, 00:04:12.455 "max_large_datain_per_connection": 64, 00:04:12.455 "max_r2t_per_connection": 4, 00:04:12.455 "pdu_pool_size": 36864, 00:04:12.455 "immediate_data_pool_size": 16384, 00:04:12.455 "data_out_pool_size": 2048 00:04:12.455 } 00:04:12.455 } 00:04:12.455 ] 00:04:12.455 } 00:04:12.455 ] 00:04:12.455 } 00:04:12.455 18:54:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:12.455 18:54:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2701276 00:04:12.455 18:54:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2701276 ']' 00:04:12.455 18:54:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2701276 00:04:12.455 18:54:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:12.455 18:54:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:12.455 18:54:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2701276 00:04:12.455 18:54:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:12.455 18:54:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:12.455 18:54:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2701276' 00:04:12.455 killing process with pid 2701276 00:04:12.455 18:54:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2701276 00:04:12.455 18:54:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2701276 00:04:12.716 18:54:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2701616 00:04:12.716 18:54:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:12.716 18:54:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:18.003 18:54:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2701616 00:04:18.003 18:54:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2701616 ']' 00:04:18.003 18:54:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2701616 00:04:18.004 18:54:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:18.004 18:54:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:18.004 18:54:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2701616 00:04:18.004 18:54:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:18.004 18:54:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:18.004 18:54:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 2701616' 00:04:18.004 killing process with pid 2701616 00:04:18.004 18:54:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2701616 00:04:18.004 18:54:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2701616 00:04:18.004 18:54:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:18.004 18:54:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:18.004 00:04:18.004 real 0m6.551s 00:04:18.004 user 0m6.431s 00:04:18.004 sys 0m0.587s 00:04:18.004 18:54:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.004 18:54:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:18.004 ************************************ 00:04:18.004 END TEST skip_rpc_with_json 00:04:18.004 ************************************ 00:04:18.004 18:54:35 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:18.004 18:54:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.004 18:54:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.004 18:54:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.004 ************************************ 00:04:18.004 START TEST skip_rpc_with_delay 00:04:18.004 ************************************ 00:04:18.004 18:54:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:18.004 18:54:35 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:18.004 18:54:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:18.004 18:54:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:18.004 18:54:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:18.004 18:54:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:18.004 18:54:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:18.004 18:54:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:18.004 18:54:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:18.004 18:54:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:18.004 18:54:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:18.004 18:54:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:18.004 18:54:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:18.004 
[2024-11-26 18:54:35.128806] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:18.004 18:54:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:18.004 18:54:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:18.004 18:54:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:18.004 18:54:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:18.004 00:04:18.004 real 0m0.081s 00:04:18.004 user 0m0.050s 00:04:18.004 sys 0m0.030s 00:04:18.004 18:54:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.004 18:54:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:18.004 ************************************ 00:04:18.004 END TEST skip_rpc_with_delay 00:04:18.004 ************************************ 00:04:18.004 18:54:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:18.004 18:54:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:18.004 18:54:35 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:18.004 18:54:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.004 18:54:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.004 18:54:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.264 ************************************ 00:04:18.264 START TEST exit_on_failed_rpc_init 00:04:18.264 ************************************ 00:04:18.264 18:54:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:18.264 18:54:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2702688 00:04:18.264 18:54:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2702688 00:04:18.264 18:54:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:18.264 18:54:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2702688 ']' 00:04:18.264 18:54:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:18.264 18:54:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:18.264 18:54:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:18.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:18.264 18:54:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:18.264 18:54:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:18.264 [2024-11-26 18:54:35.291349] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
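The exit_on_failed_rpc_init test starting here leans on RPC socket exclusivity: the first target owns /var/tmp/spdk.sock, so a second target launched against the same default socket must fail initialization and exit non-zero. A sketch of that scenario, assuming spdk_tgt is on PATH and replacing the waitforlisten helper with a plain sleep:

spdk_tgt -m 0x1 &                 # first target binds /var/tmp/spdk.sock
first=$!
sleep 2                           # the real test polls with waitforlisten instead
if spdk_tgt -m 0x2; then          # same default socket, different core mask
    echo "FAIL: second target should refuse to start"; exit 1
fi                                # expected: 'RPC Unix domain socket path ... in use'
kill -SIGINT "$first"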
00:04:18.264 [2024-11-26 18:54:35.291410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2702688 ] 00:04:18.264 [2024-11-26 18:54:35.379566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.264 [2024-11-26 18:54:35.415023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.207 18:54:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:19.207 18:54:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:19.207 18:54:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:19.207 18:54:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:19.207 18:54:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:19.207 18:54:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:19.207 18:54:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:19.207 18:54:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:19.207 18:54:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:19.207 18:54:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:19.207 18:54:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:19.207 18:54:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:19.207 18:54:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:19.207 18:54:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:19.207 18:54:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:19.207 [2024-11-26 18:54:36.141444] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:04:19.207 [2024-11-26 18:54:36.141496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2702892 ] 00:04:19.207 [2024-11-26 18:54:36.228608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.207 [2024-11-26 18:54:36.264455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:19.207 [2024-11-26 18:54:36.264504] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:19.207 [2024-11-26 18:54:36.264514] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:19.207 [2024-11-26 18:54:36.264522] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:19.207 18:54:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:19.207 18:54:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:19.207 18:54:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:19.207 18:54:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:19.207 18:54:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:19.207 18:54:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:19.207 18:54:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:19.207 18:54:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2702688 00:04:19.207 18:54:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2702688 ']' 00:04:19.207 18:54:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2702688 00:04:19.207 18:54:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:19.207 18:54:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:19.207 18:54:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2702688 00:04:19.207 18:54:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:19.207 18:54:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:19.207 18:54:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2702688' 00:04:19.207 killing process with pid 2702688 00:04:19.207 18:54:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2702688 00:04:19.207 18:54:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2702688 00:04:19.467 00:04:19.467 real 0m1.325s 00:04:19.467 user 0m1.558s 00:04:19.467 sys 0m0.380s 00:04:19.467 18:54:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.467 18:54:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:19.467 ************************************ 00:04:19.467 END TEST exit_on_failed_rpc_init 00:04:19.467 ************************************ 00:04:19.467 18:54:36 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:19.467 00:04:19.467 real 0m13.753s 00:04:19.467 user 0m13.278s 00:04:19.467 sys 0m1.633s 00:04:19.467 18:54:36 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.468 18:54:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.468 ************************************ 00:04:19.468 END TEST skip_rpc 00:04:19.468 ************************************ 00:04:19.468 18:54:36 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:19.468 18:54:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.468 18:54:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.468 18:54:36 -- 
common/autotest_common.sh@10 -- # set +x 00:04:19.468 ************************************ 00:04:19.468 START TEST rpc_client 00:04:19.468 ************************************ 00:04:19.468 18:54:36 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:19.728 * Looking for test storage... 00:04:19.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:19.728 18:54:36 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:19.728 18:54:36 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:19.728 18:54:36 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:19.728 18:54:36 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:19.728 18:54:36 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:19.728 18:54:36 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:19.728 18:54:36 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:19.728 18:54:36 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.728 18:54:36 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:19.728 18:54:36 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:19.728 18:54:36 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:19.728 18:54:36 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:19.729 18:54:36 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:19.729 18:54:36 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:19.729 18:54:36 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:19.729 18:54:36 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:19.729 18:54:36 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:19.729 18:54:36 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:19.729 18:54:36 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:19.729 18:54:36 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:19.729 18:54:36 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:19.729 18:54:36 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.729 18:54:36 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:19.729 18:54:36 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:19.729 18:54:36 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:19.729 18:54:36 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:19.729 18:54:36 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.729 18:54:36 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:19.729 18:54:36 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:19.729 18:54:36 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:19.729 18:54:36 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:19.729 18:54:36 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:19.729 18:54:36 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.729 18:54:36 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:19.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.729 --rc genhtml_branch_coverage=1 00:04:19.729 --rc genhtml_function_coverage=1 00:04:19.729 --rc genhtml_legend=1 00:04:19.729 --rc geninfo_all_blocks=1 00:04:19.729 --rc geninfo_unexecuted_blocks=1 00:04:19.729 00:04:19.729 ' 00:04:19.729 18:54:36 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:19.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.729 --rc genhtml_branch_coverage=1 00:04:19.729 --rc genhtml_function_coverage=1 00:04:19.729 --rc genhtml_legend=1 00:04:19.729 --rc geninfo_all_blocks=1 00:04:19.729 --rc geninfo_unexecuted_blocks=1 00:04:19.729 00:04:19.729 ' 00:04:19.729 18:54:36 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:19.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.729 --rc genhtml_branch_coverage=1 00:04:19.729 --rc genhtml_function_coverage=1 00:04:19.729 --rc genhtml_legend=1 00:04:19.729 --rc geninfo_all_blocks=1 00:04:19.729 --rc geninfo_unexecuted_blocks=1 00:04:19.729 00:04:19.729 ' 00:04:19.729 18:54:36 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:19.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.729 --rc genhtml_branch_coverage=1 00:04:19.729 --rc genhtml_function_coverage=1 00:04:19.729 --rc genhtml_legend=1 00:04:19.729 --rc geninfo_all_blocks=1 00:04:19.729 --rc geninfo_unexecuted_blocks=1 00:04:19.729 00:04:19.729 ' 00:04:19.729 18:54:36 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:19.729 OK 00:04:19.729 18:54:36 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:19.729 00:04:19.729 real 0m0.221s 00:04:19.729 user 0m0.134s 00:04:19.729 sys 0m0.098s 00:04:19.729 18:54:36 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.729 18:54:36 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:19.729 ************************************ 00:04:19.729 END TEST rpc_client 00:04:19.729 ************************************ 00:04:19.729 18:54:36 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
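The scripts/common.sh trace above (lt 1.15 2 -> cmp_versions) is a field-by-field dotted-version compare used to pick lcov options. A standalone sketch of the same algorithm, simplified to numeric fields only:

version_lt() {                    # returns 0 (true) when $1 < $2
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields count as 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                      # versions are equal
}
version_lt 1.15 2 && echo "lcov predates 2.x"       # mirrors the 'lt 1.15 2' call in the trace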
00:04:19.729 18:54:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.729 18:54:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.729 18:54:36 -- common/autotest_common.sh@10 -- # set +x 00:04:19.990 ************************************ 00:04:19.990 START TEST json_config 00:04:19.990 ************************************ 00:04:19.990 18:54:36 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:19.990 18:54:37 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:19.990 18:54:37 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:19.990 18:54:37 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:19.990 18:54:37 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:19.990 18:54:37 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:19.990 18:54:37 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:19.990 18:54:37 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:19.990 18:54:37 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.990 18:54:37 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:19.990 18:54:37 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:19.990 18:54:37 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:19.990 18:54:37 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:19.990 18:54:37 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:19.990 18:54:37 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:19.990 18:54:37 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:19.990 18:54:37 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:19.990 18:54:37 json_config -- scripts/common.sh@345 -- # : 1 00:04:19.990 18:54:37 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:19.990 18:54:37 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:19.990 18:54:37 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:19.990 18:54:37 json_config -- scripts/common.sh@353 -- # local d=1 00:04:19.990 18:54:37 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.990 18:54:37 json_config -- scripts/common.sh@355 -- # echo 1 00:04:19.990 18:54:37 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:19.990 18:54:37 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:19.990 18:54:37 json_config -- scripts/common.sh@353 -- # local d=2 00:04:19.990 18:54:37 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.990 18:54:37 json_config -- scripts/common.sh@355 -- # echo 2 00:04:19.990 18:54:37 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:19.990 18:54:37 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:19.990 18:54:37 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:19.990 18:54:37 json_config -- scripts/common.sh@368 -- # return 0 00:04:19.990 18:54:37 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.990 18:54:37 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:19.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.990 --rc genhtml_branch_coverage=1 00:04:19.990 --rc genhtml_function_coverage=1 00:04:19.990 --rc genhtml_legend=1 00:04:19.990 --rc geninfo_all_blocks=1 00:04:19.990 --rc geninfo_unexecuted_blocks=1 00:04:19.990 00:04:19.990 ' 00:04:19.990 18:54:37 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:19.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.990 --rc genhtml_branch_coverage=1 00:04:19.990 --rc genhtml_function_coverage=1 00:04:19.990 --rc genhtml_legend=1 00:04:19.990 --rc geninfo_all_blocks=1 00:04:19.990 --rc geninfo_unexecuted_blocks=1 00:04:19.990 00:04:19.990 ' 00:04:19.990 18:54:37 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:19.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.990 --rc genhtml_branch_coverage=1 00:04:19.990 --rc genhtml_function_coverage=1 00:04:19.990 --rc genhtml_legend=1 00:04:19.990 --rc geninfo_all_blocks=1 00:04:19.990 --rc geninfo_unexecuted_blocks=1 00:04:19.990 00:04:19.990 ' 00:04:19.990 18:54:37 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:19.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.990 --rc genhtml_branch_coverage=1 00:04:19.990 --rc genhtml_function_coverage=1 00:04:19.990 --rc genhtml_legend=1 00:04:19.990 --rc geninfo_all_blocks=1 00:04:19.990 --rc geninfo_unexecuted_blocks=1 00:04:19.990 00:04:19.990 ' 00:04:19.990 18:54:37 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:19.990 18:54:37 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:19.990 18:54:37 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:19.990 18:54:37 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:19.990 18:54:37 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:19.990 18:54:37 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:19.990 18:54:37 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:19.990 18:54:37 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:19.990 18:54:37 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:19.990 18:54:37 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:19.990 18:54:37 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:19.990 18:54:37 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:19.990 18:54:37 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:19.990 18:54:37 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:19.990 18:54:37 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:19.990 18:54:37 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:19.990 18:54:37 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:19.990 18:54:37 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:19.990 18:54:37 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:19.990 18:54:37 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:19.990 18:54:37 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:19.990 18:54:37 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:19.990 18:54:37 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:19.990 18:54:37 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.990 18:54:37 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.990 18:54:37 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.990 18:54:37 json_config -- paths/export.sh@5 -- # export PATH 00:04:19.991 18:54:37 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.991 18:54:37 json_config -- nvmf/common.sh@51 -- # : 0 00:04:19.991 18:54:37 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:19.991 18:54:37 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:19.991 18:54:37 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:19.991 18:54:37 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:19.991 18:54:37 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:19.991 18:54:37 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:19.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:19.991 18:54:37 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:19.991 18:54:37 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:19.991 18:54:37 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:19.991 18:54:37 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:19.991 18:54:37 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:19.991 18:54:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:19.991 18:54:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:19.991 18:54:37 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:19.991 18:54:37 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:19.991 18:54:37 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:19.991 18:54:37 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:19.991 18:54:37 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:19.991 18:54:37 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:19.991 18:54:37 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:19.991 18:54:37 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:19.991 18:54:37 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:19.991 18:54:37 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:19.991 18:54:37 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:19.991 18:54:37 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:19.991 INFO: JSON configuration test init 00:04:19.991 18:54:37 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:19.991 18:54:37 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:19.991 18:54:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:19.991 18:54:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.991 18:54:37 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:19.991 18:54:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:19.991 18:54:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.991 18:54:37 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:19.991 18:54:37 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:19.991 18:54:37 json_config -- json_config/common.sh@10 -- # shift 00:04:19.991 18:54:37 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:19.991 18:54:37 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:19.991 18:54:37 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:19.991 18:54:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:19.991 18:54:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:19.991 18:54:37 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2703159 00:04:19.991 18:54:37 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:19.991 Waiting for target to run... 00:04:19.991 18:54:37 json_config -- json_config/common.sh@25 -- # waitforlisten 2703159 /var/tmp/spdk_tgt.sock 00:04:19.991 18:54:37 json_config -- common/autotest_common.sh@835 -- # '[' -z 2703159 ']' 00:04:19.991 18:54:37 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:19.991 18:54:37 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:19.991 18:54:37 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:19.991 18:54:37 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:19.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:19.991 18:54:37 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:19.991 18:54:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.251 [2024-11-26 18:54:37.245725] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
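From here the json_config test drives a single target on a private RPC socket: every RPC in the trace goes through the tgt_rpc wrapper, which pins rpc.py to /var/tmp/spdk_tgt.sock. A sketch of that pattern, keeping the paths and flags from the log and assuming rpc.py is on PATH (in this flow the --wait-for-rpc pause is released when load_config reaches framework init):

SOCK=/var/tmp/spdk_tgt.sock
spdk_tgt -m 0x1 -s 1024 -r "$SOCK" --wait-for-rpc &
tgt_rpc() { rpc.py -s "$SOCK" "$@"; }
# once waitforlisten sees the socket come up:
tgt_rpc load_config < base_config.json    # base_config.json is a stand-in name for the piped config
tgt_rpc save_config > spdk_tgt_config.json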
00:04:20.251 [2024-11-26 18:54:37.245781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2703159 ] 00:04:20.511 [2024-11-26 18:54:37.591555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.511 [2024-11-26 18:54:37.626065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.082 18:54:38 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:21.082 18:54:38 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:21.082 18:54:38 json_config -- json_config/common.sh@26 -- # echo '' 00:04:21.082 00:04:21.082 18:54:38 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:21.082 18:54:38 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:21.082 18:54:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:21.082 18:54:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.082 18:54:38 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:21.082 18:54:38 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:21.082 18:54:38 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:21.082 18:54:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.082 18:54:38 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:21.082 18:54:38 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:21.082 18:54:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:21.652 18:54:38 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:21.652 18:54:38 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:21.652 18:54:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:21.652 18:54:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.652 18:54:38 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:21.652 18:54:38 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:21.652 18:54:38 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:21.652 18:54:38 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:21.652 18:54:38 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:21.652 18:54:38 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:21.652 18:54:38 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:21.652 18:54:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:21.652 18:54:38 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:21.652 18:54:38 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:21.652 18:54:38 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:21.652 18:54:38 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:21.652 18:54:38 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:21.652 18:54:38 json_config -- json_config/json_config.sh@54 -- # sort 00:04:21.652 18:54:38 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:21.652 18:54:38 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:21.652 18:54:38 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:21.652 18:54:38 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:21.652 18:54:38 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:21.652 18:54:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.913 18:54:38 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:21.913 18:54:38 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:21.913 18:54:38 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:21.913 18:54:38 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:21.913 18:54:38 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:21.913 18:54:38 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:21.913 18:54:38 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:21.913 18:54:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:21.913 18:54:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.913 18:54:38 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:21.913 18:54:38 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:21.913 18:54:38 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:21.913 18:54:38 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:21.913 18:54:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:21.913 MallocForNvmf0 00:04:21.913 18:54:39 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:21.913 18:54:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:22.173 MallocForNvmf1 00:04:22.173 18:54:39 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:22.173 18:54:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:22.433 [2024-11-26 18:54:39.385678] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:22.433 18:54:39 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:22.433 18:54:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:22.433 18:54:39 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:22.433 18:54:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:22.693 18:54:39 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:22.693 18:54:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:22.954 18:54:39 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:22.954 18:54:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:22.954 [2024-11-26 18:54:40.103875] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:22.954 18:54:40 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:22.954 18:54:40 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:22.954 18:54:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.215 18:54:40 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:23.215 18:54:40 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:23.215 18:54:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.215 18:54:40 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:23.215 18:54:40 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:23.215 18:54:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:23.215 MallocBdevForConfigChangeCheck 00:04:23.215 18:54:40 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:23.215 18:54:40 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:23.215 18:54:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.475 18:54:40 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:23.476 18:54:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:23.736 18:54:40 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:23.736 INFO: shutting down applications... 
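The create_nvmf_subsystem_config phase traced above boils down to seven RPCs against the running target: two malloc bdevs, a TCP transport, one subsystem, two namespaces, and a listener on 127.0.0.1:4420. A minimal sketch of that same sequence, assuming a spdk_tgt already listening on /var/tmp/spdk_tgt.sock as in this run (all commands and arguments are taken from the trace; nothing new is introduced):

    RPC="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512  --name MallocForNvmf0   # 8 MB bdev, 512 B blocks
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MB bdev, 1024 B blocks
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0         # TCP transport, 8 KiB IO unit
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

The two "TCP Transport Init" and "Target Listening" notices in the trace correspond to the nvmf_create_transport and nvmf_subsystem_add_listener calls respectively.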
00:04:23.736 18:54:40 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:23.736 18:54:40 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:23.736 18:54:40 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:23.736 18:54:40 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:24.001 Calling clear_iscsi_subsystem 00:04:24.001 Calling clear_nvmf_subsystem 00:04:24.001 Calling clear_nbd_subsystem 00:04:24.001 Calling clear_ublk_subsystem 00:04:24.001 Calling clear_vhost_blk_subsystem 00:04:24.001 Calling clear_vhost_scsi_subsystem 00:04:24.001 Calling clear_bdev_subsystem 00:04:24.001 18:54:41 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:24.001 18:54:41 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:24.001 18:54:41 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:24.001 18:54:41 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:24.001 18:54:41 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:24.001 18:54:41 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:24.395 18:54:41 json_config -- json_config/json_config.sh@352 -- # break 00:04:24.395 18:54:41 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:24.395 18:54:41 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:24.395 18:54:41 json_config -- json_config/common.sh@31 -- # local app=target 00:04:24.395 18:54:41 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:24.395 18:54:41 json_config -- json_config/common.sh@35 -- # [[ -n 2703159 ]] 00:04:24.395 18:54:41 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2703159 00:04:24.395 18:54:41 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:24.395 18:54:41 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:24.395 18:54:41 json_config -- json_config/common.sh@41 -- # kill -0 2703159 00:04:24.395 18:54:41 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:24.993 18:54:42 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:24.993 18:54:42 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:24.993 18:54:42 json_config -- json_config/common.sh@41 -- # kill -0 2703159 00:04:24.993 18:54:42 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:24.993 18:54:42 json_config -- json_config/common.sh@43 -- # break 00:04:24.993 18:54:42 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:24.993 18:54:42 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:24.993 SPDK target shutdown done 00:04:24.993 18:54:42 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:24.993 INFO: relaunching applications... 
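The shutdown just logged is json_config_test_shutdown_app from json_config/common.sh: it SIGINTs the target, then polls `kill -0` every half second for up to 30 tries before declaring success. A minimal sketch of that loop, using the target pid from this run (the real helper also clears app_pid["$app"] and errors out if the process never exits):

    pid=2703159
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break   # kill -0 only probes: process gone -> done
        sleep 0.5
    done
    echo 'SPDK target shutdown done'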
00:04:24.993 18:54:42 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:24.993 18:54:42 json_config -- json_config/common.sh@9 -- # local app=target 00:04:24.993 18:54:42 json_config -- json_config/common.sh@10 -- # shift 00:04:24.993 18:54:42 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:24.993 18:54:42 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:24.993 18:54:42 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:24.993 18:54:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:24.993 18:54:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:24.993 18:54:42 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2704301 00:04:24.993 18:54:42 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:24.993 Waiting for target to run... 00:04:24.993 18:54:42 json_config -- json_config/common.sh@25 -- # waitforlisten 2704301 /var/tmp/spdk_tgt.sock 00:04:24.993 18:54:42 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:24.993 18:54:42 json_config -- common/autotest_common.sh@835 -- # '[' -z 2704301 ']' 00:04:24.993 18:54:42 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:24.993 18:54:42 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:24.993 18:54:42 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:24.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:24.993 18:54:42 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:24.993 18:54:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.993 [2024-11-26 18:54:42.099596] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:04:24.993 [2024-11-26 18:54:42.099656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2704301 ] 00:04:25.254 [2024-11-26 18:54:42.447467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.515 [2024-11-26 18:54:42.480710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.776 [2024-11-26 18:54:42.980004] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:26.036 [2024-11-26 18:54:43.012345] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:26.036 18:54:43 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:26.036 18:54:43 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:26.036 18:54:43 json_config -- json_config/common.sh@26 -- # echo '' 00:04:26.036 00:04:26.036 18:54:43 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:26.036 18:54:43 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:26.036 INFO: Checking if target configuration is the same... 
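The relaunch above restarts spdk_tgt from the previously saved JSON config and then blocks in waitforlisten until the RPC socket answers. A rough sketch of that pattern, assuming the same paths as this run; the probe shown is one simple choice of cheap RPC, whereas the real waitforlisten in autotest_common.sh (note max_retries=100 in the trace) is more elaborate:

    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json spdk_tgt_config.json &
    pid=$!
    # retry a cheap RPC until the UNIX-domain socket starts answering
    for ((i = 0; i < 100; i++)); do
        scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done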
00:04:26.036 18:54:43 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:26.036 18:54:43 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:26.036 18:54:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:26.036 + '[' 2 -ne 2 ']' 00:04:26.036 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:26.036 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:26.036 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:26.036 +++ basename /dev/fd/62 00:04:26.036 ++ mktemp /tmp/62.XXX 00:04:26.036 + tmp_file_1=/tmp/62.jtk 00:04:26.036 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:26.036 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:26.036 + tmp_file_2=/tmp/spdk_tgt_config.json.ngB 00:04:26.036 + ret=0 00:04:26.036 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:26.297 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:26.297 + diff -u /tmp/62.jtk /tmp/spdk_tgt_config.json.ngB 00:04:26.297 + echo 'INFO: JSON config files are the same' 00:04:26.297 INFO: JSON config files are the same 00:04:26.297 + rm /tmp/62.jtk /tmp/spdk_tgt_config.json.ngB 00:04:26.297 + exit 0 00:04:26.297 18:54:43 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:26.297 18:54:43 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:26.297 INFO: changing configuration and checking if this can be detected... 00:04:26.297 18:54:43 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:26.297 18:54:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:26.557 18:54:43 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:26.557 18:54:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:26.557 18:54:43 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:26.557 + '[' 2 -ne 2 ']' 00:04:26.557 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:26.557 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
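The json_diff.sh run above feeds the live config (save_config on /dev/fd/62) and the on-disk file through config_filter.py -method sort before diffing, so key and array ordering cannot produce false mismatches. A condensed sketch of the same check, assuming config_filter.py reads the config on stdin as json_diff.sh drives it; the /tmp paths are illustrative stand-ins for the mktemp files in the trace:

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > /tmp/live.json
    test/json_config/config_filter.py -method sort \
        < spdk_tgt_config.json > /tmp/file.json
    diff -u /tmp/live.json /tmp/file.json \
        && echo 'INFO: JSON config files are the same'

The second pass, which follows after bdev_malloc_delete removes MallocBdevForConfigChangeCheck, expects this diff to fail (ret=1) and prints both normalized files for inspection.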
00:04:26.557 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:26.557 +++ basename /dev/fd/62 00:04:26.557 ++ mktemp /tmp/62.XXX 00:04:26.557 + tmp_file_1=/tmp/62.7gA 00:04:26.557 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:26.557 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:26.557 + tmp_file_2=/tmp/spdk_tgt_config.json.43x 00:04:26.557 + ret=0 00:04:26.558 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:26.818 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:26.818 + diff -u /tmp/62.7gA /tmp/spdk_tgt_config.json.43x 00:04:26.818 + ret=1 00:04:26.818 + echo '=== Start of file: /tmp/62.7gA ===' 00:04:26.818 + cat /tmp/62.7gA 00:04:26.818 + echo '=== End of file: /tmp/62.7gA ===' 00:04:26.818 + echo '' 00:04:26.818 + echo '=== Start of file: /tmp/spdk_tgt_config.json.43x ===' 00:04:26.818 + cat /tmp/spdk_tgt_config.json.43x 00:04:26.818 + echo '=== End of file: /tmp/spdk_tgt_config.json.43x ===' 00:04:26.818 + echo '' 00:04:26.818 + rm /tmp/62.7gA /tmp/spdk_tgt_config.json.43x 00:04:26.818 + exit 1 00:04:26.818 18:54:44 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:26.818 INFO: configuration change detected. 00:04:26.818 18:54:44 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:26.818 18:54:44 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:26.818 18:54:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:26.818 18:54:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.818 18:54:44 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:26.818 18:54:44 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:26.818 18:54:44 json_config -- json_config/json_config.sh@324 -- # [[ -n 2704301 ]] 00:04:26.818 18:54:44 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:26.818 18:54:44 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:26.818 18:54:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:26.818 18:54:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.079 18:54:44 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:27.079 18:54:44 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:27.079 18:54:44 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:27.079 18:54:44 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:27.079 18:54:44 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:27.079 18:54:44 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:27.079 18:54:44 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:27.079 18:54:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.079 18:54:44 json_config -- json_config/json_config.sh@330 -- # killprocess 2704301 00:04:27.079 18:54:44 json_config -- common/autotest_common.sh@954 -- # '[' -z 2704301 ']' 00:04:27.079 18:54:44 json_config -- common/autotest_common.sh@958 -- # kill -0 2704301 00:04:27.079 18:54:44 json_config -- common/autotest_common.sh@959 -- # uname 00:04:27.079 18:54:44 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:27.079 18:54:44 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2704301 00:04:27.079 18:54:44 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:27.079 18:54:44 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:27.079 18:54:44 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2704301' 00:04:27.079 killing process with pid 2704301 00:04:27.079 18:54:44 json_config -- common/autotest_common.sh@973 -- # kill 2704301 00:04:27.079 18:54:44 json_config -- common/autotest_common.sh@978 -- # wait 2704301 00:04:27.341 18:54:44 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:27.341 18:54:44 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:27.341 18:54:44 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:27.341 18:54:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.341 18:54:44 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:27.341 18:54:44 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:27.341 INFO: Success 00:04:27.341 00:04:27.341 real 0m7.485s 00:04:27.341 user 0m8.988s 00:04:27.341 sys 0m2.091s 00:04:27.341 18:54:44 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.341 18:54:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.341 ************************************ 00:04:27.341 END TEST json_config 00:04:27.341 ************************************ 00:04:27.341 18:54:44 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:27.341 18:54:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.341 18:54:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.341 18:54:44 -- common/autotest_common.sh@10 -- # set +x 00:04:27.341 ************************************ 00:04:27.341 START TEST json_config_extra_key 00:04:27.341 ************************************ 00:04:27.341 18:54:44 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:27.603 18:54:44 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:27.603 18:54:44 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:27.603 18:54:44 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:27.603 18:54:44 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:27.603 18:54:44 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:27.603 18:54:44 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:27.603 18:54:44 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:27.603 18:54:44 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.603 18:54:44 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:27.603 18:54:44 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:27.603 18:54:44 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:27.603 18:54:44 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:27.603 18:54:44 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:04:27.603 18:54:44 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:27.603 18:54:44 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:27.603 18:54:44 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:27.603 18:54:44 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:27.603 18:54:44 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:27.603 18:54:44 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:27.603 18:54:44 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:27.603 18:54:44 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:27.603 18:54:44 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.603 18:54:44 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:27.603 18:54:44 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:27.603 18:54:44 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:27.603 18:54:44 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:27.603 18:54:44 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.603 18:54:44 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:27.603 18:54:44 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:27.603 18:54:44 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:27.603 18:54:44 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:27.603 18:54:44 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:27.603 18:54:44 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.603 18:54:44 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:27.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.603 --rc genhtml_branch_coverage=1 00:04:27.603 --rc genhtml_function_coverage=1 00:04:27.603 --rc genhtml_legend=1 00:04:27.603 --rc geninfo_all_blocks=1 00:04:27.603 --rc geninfo_unexecuted_blocks=1 00:04:27.603 00:04:27.603 ' 00:04:27.603 18:54:44 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:27.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.603 --rc genhtml_branch_coverage=1 00:04:27.603 --rc genhtml_function_coverage=1 00:04:27.603 --rc genhtml_legend=1 00:04:27.603 --rc geninfo_all_blocks=1 00:04:27.603 --rc geninfo_unexecuted_blocks=1 00:04:27.603 00:04:27.603 ' 00:04:27.603 18:54:44 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:27.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.603 --rc genhtml_branch_coverage=1 00:04:27.603 --rc genhtml_function_coverage=1 00:04:27.603 --rc genhtml_legend=1 00:04:27.603 --rc geninfo_all_blocks=1 00:04:27.603 --rc geninfo_unexecuted_blocks=1 00:04:27.603 00:04:27.603 ' 00:04:27.603 18:54:44 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:27.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.603 --rc genhtml_branch_coverage=1 00:04:27.603 --rc genhtml_function_coverage=1 00:04:27.603 --rc genhtml_legend=1 00:04:27.603 --rc geninfo_all_blocks=1 00:04:27.603 --rc geninfo_unexecuted_blocks=1 00:04:27.603 00:04:27.603 ' 00:04:27.603 18:54:44 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:27.603 18:54:44 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:27.603 18:54:44 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:27.603 18:54:44 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:27.603 18:54:44 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:27.603 18:54:44 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:27.603 18:54:44 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:27.603 18:54:44 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:27.603 18:54:44 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:27.603 18:54:44 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:27.603 18:54:44 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:27.603 18:54:44 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:27.603 18:54:44 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:27.603 18:54:44 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:27.603 18:54:44 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:27.603 18:54:44 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:27.603 18:54:44 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:27.603 18:54:44 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:27.603 18:54:44 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:27.603 18:54:44 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:27.603 18:54:44 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:27.603 18:54:44 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:27.603 18:54:44 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:27.603 18:54:44 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.603 18:54:44 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.603 18:54:44 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.604 18:54:44 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:27.604 18:54:44 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.604 18:54:44 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:27.604 18:54:44 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:27.604 18:54:44 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:27.604 18:54:44 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:27.604 18:54:44 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:27.604 18:54:44 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:27.604 18:54:44 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:27.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:27.604 18:54:44 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:27.604 18:54:44 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:27.604 18:54:44 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:27.604 18:54:44 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:27.604 18:54:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:27.604 18:54:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:27.604 18:54:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:27.604 18:54:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:27.604 18:54:44 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:27.604 18:54:44 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:27.604 18:54:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:27.604 18:54:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:27.604 18:54:44 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:27.604 18:54:44 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:27.604 INFO: launching applications... 
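A few lines up, this test gates its lcov options on scripts/common.sh's version compare (`lt 1.15 2`): each version string is split on `.`, `-`, and `:` into fields, which are then compared numerically left to right. A simplified sketch of that logic; the real cmp_versions also validates each field through decimal() before comparing, which is omitted here:

    lt() {
        local -a v1 v2
        local i n
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first lower field decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1    # equal versions are not less-than
    }
    lt 1.15 2 && echo 'lcov is older than 2.x'   # true here, so the 1.x LCOV_OPTS are used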
00:04:27.604 18:54:44 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:27.604 18:54:44 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:27.604 18:54:44 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:27.604 18:54:44 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:27.604 18:54:44 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:27.604 18:54:44 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:27.604 18:54:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:27.604 18:54:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:27.604 18:54:44 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2705059 00:04:27.604 18:54:44 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:27.604 Waiting for target to run... 00:04:27.604 18:54:44 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2705059 /var/tmp/spdk_tgt.sock 00:04:27.604 18:54:44 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2705059 ']' 00:04:27.604 18:54:44 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:27.604 18:54:44 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:27.604 18:54:44 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:27.604 18:54:44 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:27.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:27.604 18:54:44 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:27.604 18:54:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:27.604 [2024-11-26 18:54:44.806379] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:04:27.604 [2024-11-26 18:54:44.806456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2705059 ] 00:04:28.176 [2024-11-26 18:54:45.123791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.176 [2024-11-26 18:54:45.153853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.435 18:54:45 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:28.435 18:54:45 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:28.435 18:54:45 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:28.435 00:04:28.435 18:54:45 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:28.435 INFO: shutting down applications... 
00:04:28.435 18:54:45 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:28.435 18:54:45 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:28.435 18:54:45 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:28.435 18:54:45 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2705059 ]] 00:04:28.435 18:54:45 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2705059 00:04:28.435 18:54:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:28.435 18:54:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:28.435 18:54:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2705059 00:04:28.435 18:54:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:29.006 18:54:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:29.006 18:54:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:29.006 18:54:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2705059 00:04:29.006 18:54:46 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:29.006 18:54:46 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:29.006 18:54:46 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:29.006 18:54:46 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:29.006 SPDK target shutdown done 00:04:29.006 18:54:46 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:29.006 Success 00:04:29.006 00:04:29.006 real 0m1.571s 00:04:29.006 user 0m1.154s 00:04:29.006 sys 0m0.442s 00:04:29.006 18:54:46 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.006 18:54:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:29.006 ************************************ 00:04:29.006 END TEST json_config_extra_key 00:04:29.006 ************************************ 00:04:29.006 18:54:46 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:29.006 18:54:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.006 18:54:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.006 18:54:46 -- common/autotest_common.sh@10 -- # set +x 00:04:29.006 ************************************ 00:04:29.006 START TEST alias_rpc 00:04:29.006 ************************************ 00:04:29.006 18:54:46 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:29.268 * Looking for test storage... 
00:04:29.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:29.268 18:54:46 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:29.268 18:54:46 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:29.268 18:54:46 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:29.268 18:54:46 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:29.268 18:54:46 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.268 18:54:46 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.268 18:54:46 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.268 18:54:46 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.268 18:54:46 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.268 18:54:46 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.268 18:54:46 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.268 18:54:46 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.268 18:54:46 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.268 18:54:46 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.268 18:54:46 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.268 18:54:46 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:29.268 18:54:46 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:29.268 18:54:46 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.268 18:54:46 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:29.268 18:54:46 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:29.268 18:54:46 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:29.268 18:54:46 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.268 18:54:46 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:29.268 18:54:46 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.268 18:54:46 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:29.268 18:54:46 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:29.268 18:54:46 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.268 18:54:46 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:29.268 18:54:46 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.268 18:54:46 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.268 18:54:46 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.268 18:54:46 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:29.268 18:54:46 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.268 18:54:46 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:29.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.268 --rc genhtml_branch_coverage=1 00:04:29.268 --rc genhtml_function_coverage=1 00:04:29.268 --rc genhtml_legend=1 00:04:29.268 --rc geninfo_all_blocks=1 00:04:29.268 --rc geninfo_unexecuted_blocks=1 00:04:29.268 00:04:29.268 ' 00:04:29.268 18:54:46 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:29.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.268 --rc genhtml_branch_coverage=1 00:04:29.268 --rc genhtml_function_coverage=1 00:04:29.268 --rc genhtml_legend=1 00:04:29.268 --rc geninfo_all_blocks=1 00:04:29.268 --rc geninfo_unexecuted_blocks=1 00:04:29.268 00:04:29.268 ' 00:04:29.268 18:54:46 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:29.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.268 --rc genhtml_branch_coverage=1 00:04:29.268 --rc genhtml_function_coverage=1 00:04:29.268 --rc genhtml_legend=1 00:04:29.268 --rc geninfo_all_blocks=1 00:04:29.268 --rc geninfo_unexecuted_blocks=1 00:04:29.268 00:04:29.268 ' 00:04:29.268 18:54:46 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:29.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.268 --rc genhtml_branch_coverage=1 00:04:29.268 --rc genhtml_function_coverage=1 00:04:29.268 --rc genhtml_legend=1 00:04:29.268 --rc geninfo_all_blocks=1 00:04:29.268 --rc geninfo_unexecuted_blocks=1 00:04:29.268 00:04:29.268 ' 00:04:29.268 18:54:46 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:29.268 18:54:46 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2705419 00:04:29.268 18:54:46 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2705419 00:04:29.268 18:54:46 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2705419 ']' 00:04:29.268 18:54:46 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.268 18:54:46 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.268 18:54:46 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.268 18:54:46 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.268 18:54:46 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.268 18:54:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.268 [2024-11-26 18:54:46.453696] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
00:04:29.268 [2024-11-26 18:54:46.453774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2705419 ] 00:04:29.529 [2024-11-26 18:54:46.541435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.529 [2024-11-26 18:54:46.576340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.125 18:54:47 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:30.125 18:54:47 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:30.125 18:54:47 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:30.386 18:54:47 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2705419 00:04:30.386 18:54:47 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2705419 ']' 00:04:30.386 18:54:47 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2705419 00:04:30.386 18:54:47 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:30.386 18:54:47 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:30.386 18:54:47 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2705419 00:04:30.386 18:54:47 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:30.386 18:54:47 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:30.386 18:54:47 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2705419' 00:04:30.386 killing process with pid 2705419 00:04:30.386 18:54:47 alias_rpc -- common/autotest_common.sh@973 -- # kill 2705419 00:04:30.386 18:54:47 alias_rpc -- common/autotest_common.sh@978 -- # wait 2705419 00:04:30.647 00:04:30.647 real 0m1.482s 00:04:30.647 user 0m1.600s 00:04:30.647 sys 0m0.428s 00:04:30.647 18:54:47 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.647 18:54:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.647 ************************************ 00:04:30.647 END TEST alias_rpc 00:04:30.647 ************************************ 00:04:30.647 18:54:47 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:30.647 18:54:47 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:30.647 18:54:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.647 18:54:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.647 18:54:47 -- common/autotest_common.sh@10 -- # set +x 00:04:30.647 ************************************ 00:04:30.647 START TEST spdkcli_tcp 00:04:30.647 ************************************ 00:04:30.647 18:54:47 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:30.647 * Looking for test storage... 
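The alias_rpc teardown above runs killprocess, which guards against killing the wrong thing: it first probes the pid with `kill -0`, then checks the process name via `ps --no-headers -o comm=` (expecting reactor_0 here) and refuses to proceed if the name is sudo, and only then kills and reaps the target. A condensed sketch of that sequence; the real helper also branches on uname for non-Linux hosts:

    killprocess() {
        local pid=$1 name
        kill -0 "$pid" || return 1                 # must still be running
        name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0
        [ "$name" = sudo ] && return 1             # never kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                 # wait works: spdk_tgt is our child
    }
    killprocess 2705419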
00:04:30.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:30.647 18:54:47 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:30.647 18:54:47 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:30.647 18:54:47 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:30.909 18:54:47 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:30.909 18:54:47 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.909 18:54:47 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.909 18:54:47 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.909 18:54:47 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.909 18:54:47 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.909 18:54:47 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.909 18:54:47 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.909 18:54:47 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.909 18:54:47 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.909 18:54:47 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.909 18:54:47 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.909 18:54:47 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:30.909 18:54:47 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:30.909 18:54:47 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.909 18:54:47 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:30.909 18:54:47 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:30.909 18:54:47 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:30.909 18:54:47 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.909 18:54:47 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:30.909 18:54:47 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.909 18:54:47 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:30.909 18:54:47 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:30.909 18:54:47 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.909 18:54:47 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:30.909 18:54:47 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.909 18:54:47 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.909 18:54:47 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.909 18:54:47 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:30.909 18:54:47 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.909 18:54:47 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:30.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.909 --rc genhtml_branch_coverage=1 00:04:30.909 --rc genhtml_function_coverage=1 00:04:30.909 --rc genhtml_legend=1 00:04:30.909 --rc geninfo_all_blocks=1 00:04:30.909 --rc geninfo_unexecuted_blocks=1 00:04:30.909 00:04:30.909 ' 00:04:30.909 18:54:47 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:30.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.909 --rc genhtml_branch_coverage=1 00:04:30.909 --rc genhtml_function_coverage=1 00:04:30.909 --rc genhtml_legend=1 00:04:30.909 --rc geninfo_all_blocks=1 00:04:30.909 --rc 
geninfo_unexecuted_blocks=1 00:04:30.909 00:04:30.909 ' 00:04:30.909 18:54:47 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:30.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.909 --rc genhtml_branch_coverage=1 00:04:30.909 --rc genhtml_function_coverage=1 00:04:30.909 --rc genhtml_legend=1 00:04:30.909 --rc geninfo_all_blocks=1 00:04:30.910 --rc geninfo_unexecuted_blocks=1 00:04:30.910 00:04:30.910 ' 00:04:30.910 18:54:47 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:30.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.910 --rc genhtml_branch_coverage=1 00:04:30.910 --rc genhtml_function_coverage=1 00:04:30.910 --rc genhtml_legend=1 00:04:30.910 --rc geninfo_all_blocks=1 00:04:30.910 --rc geninfo_unexecuted_blocks=1 00:04:30.910 00:04:30.910 ' 00:04:30.910 18:54:47 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:30.910 18:54:47 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:30.910 18:54:47 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:30.910 18:54:47 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:30.910 18:54:47 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:30.910 18:54:47 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:30.910 18:54:47 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:30.910 18:54:47 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:30.910 18:54:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:30.910 18:54:47 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2705733 00:04:30.910 18:54:47 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2705733 00:04:30.910 18:54:47 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:30.910 18:54:47 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2705733 ']' 00:04:30.910 18:54:47 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.910 18:54:47 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.910 18:54:47 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.910 18:54:47 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.910 18:54:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:30.910 [2024-11-26 18:54:48.020847] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
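Unlike the single-core json_config runs, spdkcli_tcp starts the target with a two-core mask, which is why the trace that follows reports two available cores and a reactor on each. The relevant flags, as used in this run (`-m` is a hexadecimal core mask, `-p` selects the main core):

    build/bin/spdk_tgt -m 0x1 ...     # core 0 only  -> "Total cores available: 1"
    build/bin/spdk_tgt -m 0x3 -p 0    # cores 0 and 1 -> two reactors, main core 0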
00:04:30.910 [2024-11-26 18:54:48.020921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2705733 ] 00:04:30.910 [2024-11-26 18:54:48.107353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:31.170 [2024-11-26 18:54:48.149339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:31.170 [2024-11-26 18:54:48.149477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.741 18:54:48 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.741 18:54:48 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:31.742 18:54:48 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2705899 00:04:31.742 18:54:48 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:31.742 18:54:48 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:32.002 [ 00:04:32.002 "bdev_malloc_delete", 00:04:32.002 "bdev_malloc_create", 00:04:32.002 "bdev_null_resize", 00:04:32.002 "bdev_null_delete", 00:04:32.002 "bdev_null_create", 00:04:32.002 "bdev_nvme_cuse_unregister", 00:04:32.002 "bdev_nvme_cuse_register", 00:04:32.002 "bdev_opal_new_user", 00:04:32.002 "bdev_opal_set_lock_state", 00:04:32.002 "bdev_opal_delete", 00:04:32.002 "bdev_opal_get_info", 00:04:32.002 "bdev_opal_create", 00:04:32.002 "bdev_nvme_opal_revert", 00:04:32.002 "bdev_nvme_opal_init", 00:04:32.002 "bdev_nvme_send_cmd", 00:04:32.002 "bdev_nvme_set_keys", 00:04:32.002 "bdev_nvme_get_path_iostat", 00:04:32.002 "bdev_nvme_get_mdns_discovery_info", 00:04:32.002 "bdev_nvme_stop_mdns_discovery", 00:04:32.002 "bdev_nvme_start_mdns_discovery", 00:04:32.002 "bdev_nvme_set_multipath_policy", 00:04:32.002 "bdev_nvme_set_preferred_path", 00:04:32.002 "bdev_nvme_get_io_paths", 00:04:32.002 "bdev_nvme_remove_error_injection", 00:04:32.002 "bdev_nvme_add_error_injection", 00:04:32.002 "bdev_nvme_get_discovery_info", 00:04:32.002 "bdev_nvme_stop_discovery", 00:04:32.002 "bdev_nvme_start_discovery", 00:04:32.002 "bdev_nvme_get_controller_health_info", 00:04:32.002 "bdev_nvme_disable_controller", 00:04:32.002 "bdev_nvme_enable_controller", 00:04:32.002 "bdev_nvme_reset_controller", 00:04:32.002 "bdev_nvme_get_transport_statistics", 00:04:32.002 "bdev_nvme_apply_firmware", 00:04:32.002 "bdev_nvme_detach_controller", 00:04:32.002 "bdev_nvme_get_controllers", 00:04:32.002 "bdev_nvme_attach_controller", 00:04:32.002 "bdev_nvme_set_hotplug", 00:04:32.002 "bdev_nvme_set_options", 00:04:32.002 "bdev_passthru_delete", 00:04:32.002 "bdev_passthru_create", 00:04:32.002 "bdev_lvol_set_parent_bdev", 00:04:32.002 "bdev_lvol_set_parent", 00:04:32.002 "bdev_lvol_check_shallow_copy", 00:04:32.002 "bdev_lvol_start_shallow_copy", 00:04:32.002 "bdev_lvol_grow_lvstore", 00:04:32.002 "bdev_lvol_get_lvols", 00:04:32.002 "bdev_lvol_get_lvstores", 00:04:32.002 "bdev_lvol_delete", 00:04:32.002 "bdev_lvol_set_read_only", 00:04:32.002 "bdev_lvol_resize", 00:04:32.002 "bdev_lvol_decouple_parent", 00:04:32.002 "bdev_lvol_inflate", 00:04:32.002 "bdev_lvol_rename", 00:04:32.002 "bdev_lvol_clone_bdev", 00:04:32.002 "bdev_lvol_clone", 00:04:32.002 "bdev_lvol_snapshot", 00:04:32.002 "bdev_lvol_create", 00:04:32.002 "bdev_lvol_delete_lvstore", 00:04:32.002 "bdev_lvol_rename_lvstore", 
00:04:32.002 "bdev_lvol_create_lvstore", 00:04:32.002 "bdev_raid_set_options", 00:04:32.002 "bdev_raid_remove_base_bdev", 00:04:32.002 "bdev_raid_add_base_bdev", 00:04:32.002 "bdev_raid_delete", 00:04:32.002 "bdev_raid_create", 00:04:32.002 "bdev_raid_get_bdevs", 00:04:32.002 "bdev_error_inject_error", 00:04:32.002 "bdev_error_delete", 00:04:32.002 "bdev_error_create", 00:04:32.002 "bdev_split_delete", 00:04:32.002 "bdev_split_create", 00:04:32.002 "bdev_delay_delete", 00:04:32.002 "bdev_delay_create", 00:04:32.002 "bdev_delay_update_latency", 00:04:32.002 "bdev_zone_block_delete", 00:04:32.002 "bdev_zone_block_create", 00:04:32.002 "blobfs_create", 00:04:32.002 "blobfs_detect", 00:04:32.002 "blobfs_set_cache_size", 00:04:32.002 "bdev_aio_delete", 00:04:32.002 "bdev_aio_rescan", 00:04:32.002 "bdev_aio_create", 00:04:32.002 "bdev_ftl_set_property", 00:04:32.002 "bdev_ftl_get_properties", 00:04:32.002 "bdev_ftl_get_stats", 00:04:32.002 "bdev_ftl_unmap", 00:04:32.003 "bdev_ftl_unload", 00:04:32.003 "bdev_ftl_delete", 00:04:32.003 "bdev_ftl_load", 00:04:32.003 "bdev_ftl_create", 00:04:32.003 "bdev_virtio_attach_controller", 00:04:32.003 "bdev_virtio_scsi_get_devices", 00:04:32.003 "bdev_virtio_detach_controller", 00:04:32.003 "bdev_virtio_blk_set_hotplug", 00:04:32.003 "bdev_iscsi_delete", 00:04:32.003 "bdev_iscsi_create", 00:04:32.003 "bdev_iscsi_set_options", 00:04:32.003 "accel_error_inject_error", 00:04:32.003 "ioat_scan_accel_module", 00:04:32.003 "dsa_scan_accel_module", 00:04:32.003 "iaa_scan_accel_module", 00:04:32.003 "vfu_virtio_create_fs_endpoint", 00:04:32.003 "vfu_virtio_create_scsi_endpoint", 00:04:32.003 "vfu_virtio_scsi_remove_target", 00:04:32.003 "vfu_virtio_scsi_add_target", 00:04:32.003 "vfu_virtio_create_blk_endpoint", 00:04:32.003 "vfu_virtio_delete_endpoint", 00:04:32.003 "keyring_file_remove_key", 00:04:32.003 "keyring_file_add_key", 00:04:32.003 "keyring_linux_set_options", 00:04:32.003 "fsdev_aio_delete", 00:04:32.003 "fsdev_aio_create", 00:04:32.003 "iscsi_get_histogram", 00:04:32.003 "iscsi_enable_histogram", 00:04:32.003 "iscsi_set_options", 00:04:32.003 "iscsi_get_auth_groups", 00:04:32.003 "iscsi_auth_group_remove_secret", 00:04:32.003 "iscsi_auth_group_add_secret", 00:04:32.003 "iscsi_delete_auth_group", 00:04:32.003 "iscsi_create_auth_group", 00:04:32.003 "iscsi_set_discovery_auth", 00:04:32.003 "iscsi_get_options", 00:04:32.003 "iscsi_target_node_request_logout", 00:04:32.003 "iscsi_target_node_set_redirect", 00:04:32.003 "iscsi_target_node_set_auth", 00:04:32.003 "iscsi_target_node_add_lun", 00:04:32.003 "iscsi_get_stats", 00:04:32.003 "iscsi_get_connections", 00:04:32.003 "iscsi_portal_group_set_auth", 00:04:32.003 "iscsi_start_portal_group", 00:04:32.003 "iscsi_delete_portal_group", 00:04:32.003 "iscsi_create_portal_group", 00:04:32.003 "iscsi_get_portal_groups", 00:04:32.003 "iscsi_delete_target_node", 00:04:32.003 "iscsi_target_node_remove_pg_ig_maps", 00:04:32.003 "iscsi_target_node_add_pg_ig_maps", 00:04:32.003 "iscsi_create_target_node", 00:04:32.003 "iscsi_get_target_nodes", 00:04:32.003 "iscsi_delete_initiator_group", 00:04:32.003 "iscsi_initiator_group_remove_initiators", 00:04:32.003 "iscsi_initiator_group_add_initiators", 00:04:32.003 "iscsi_create_initiator_group", 00:04:32.003 "iscsi_get_initiator_groups", 00:04:32.003 "nvmf_set_crdt", 00:04:32.003 "nvmf_set_config", 00:04:32.003 "nvmf_set_max_subsystems", 00:04:32.003 "nvmf_stop_mdns_prr", 00:04:32.003 "nvmf_publish_mdns_prr", 00:04:32.003 "nvmf_subsystem_get_listeners", 00:04:32.003 
"nvmf_subsystem_get_qpairs", 00:04:32.003 "nvmf_subsystem_get_controllers", 00:04:32.003 "nvmf_get_stats", 00:04:32.003 "nvmf_get_transports", 00:04:32.003 "nvmf_create_transport", 00:04:32.003 "nvmf_get_targets", 00:04:32.003 "nvmf_delete_target", 00:04:32.003 "nvmf_create_target", 00:04:32.003 "nvmf_subsystem_allow_any_host", 00:04:32.003 "nvmf_subsystem_set_keys", 00:04:32.003 "nvmf_subsystem_remove_host", 00:04:32.003 "nvmf_subsystem_add_host", 00:04:32.003 "nvmf_ns_remove_host", 00:04:32.003 "nvmf_ns_add_host", 00:04:32.003 "nvmf_subsystem_remove_ns", 00:04:32.003 "nvmf_subsystem_set_ns_ana_group", 00:04:32.003 "nvmf_subsystem_add_ns", 00:04:32.003 "nvmf_subsystem_listener_set_ana_state", 00:04:32.003 "nvmf_discovery_get_referrals", 00:04:32.003 "nvmf_discovery_remove_referral", 00:04:32.003 "nvmf_discovery_add_referral", 00:04:32.003 "nvmf_subsystem_remove_listener", 00:04:32.003 "nvmf_subsystem_add_listener", 00:04:32.003 "nvmf_delete_subsystem", 00:04:32.003 "nvmf_create_subsystem", 00:04:32.003 "nvmf_get_subsystems", 00:04:32.003 "env_dpdk_get_mem_stats", 00:04:32.003 "nbd_get_disks", 00:04:32.003 "nbd_stop_disk", 00:04:32.003 "nbd_start_disk", 00:04:32.003 "ublk_recover_disk", 00:04:32.003 "ublk_get_disks", 00:04:32.003 "ublk_stop_disk", 00:04:32.003 "ublk_start_disk", 00:04:32.003 "ublk_destroy_target", 00:04:32.003 "ublk_create_target", 00:04:32.003 "virtio_blk_create_transport", 00:04:32.003 "virtio_blk_get_transports", 00:04:32.003 "vhost_controller_set_coalescing", 00:04:32.003 "vhost_get_controllers", 00:04:32.003 "vhost_delete_controller", 00:04:32.003 "vhost_create_blk_controller", 00:04:32.003 "vhost_scsi_controller_remove_target", 00:04:32.003 "vhost_scsi_controller_add_target", 00:04:32.003 "vhost_start_scsi_controller", 00:04:32.003 "vhost_create_scsi_controller", 00:04:32.003 "thread_set_cpumask", 00:04:32.003 "scheduler_set_options", 00:04:32.003 "framework_get_governor", 00:04:32.003 "framework_get_scheduler", 00:04:32.003 "framework_set_scheduler", 00:04:32.003 "framework_get_reactors", 00:04:32.003 "thread_get_io_channels", 00:04:32.003 "thread_get_pollers", 00:04:32.003 "thread_get_stats", 00:04:32.003 "framework_monitor_context_switch", 00:04:32.003 "spdk_kill_instance", 00:04:32.003 "log_enable_timestamps", 00:04:32.003 "log_get_flags", 00:04:32.003 "log_clear_flag", 00:04:32.003 "log_set_flag", 00:04:32.003 "log_get_level", 00:04:32.003 "log_set_level", 00:04:32.003 "log_get_print_level", 00:04:32.003 "log_set_print_level", 00:04:32.003 "framework_enable_cpumask_locks", 00:04:32.003 "framework_disable_cpumask_locks", 00:04:32.003 "framework_wait_init", 00:04:32.003 "framework_start_init", 00:04:32.003 "scsi_get_devices", 00:04:32.003 "bdev_get_histogram", 00:04:32.003 "bdev_enable_histogram", 00:04:32.003 "bdev_set_qos_limit", 00:04:32.003 "bdev_set_qd_sampling_period", 00:04:32.003 "bdev_get_bdevs", 00:04:32.003 "bdev_reset_iostat", 00:04:32.003 "bdev_get_iostat", 00:04:32.003 "bdev_examine", 00:04:32.003 "bdev_wait_for_examine", 00:04:32.003 "bdev_set_options", 00:04:32.003 "accel_get_stats", 00:04:32.003 "accel_set_options", 00:04:32.003 "accel_set_driver", 00:04:32.003 "accel_crypto_key_destroy", 00:04:32.003 "accel_crypto_keys_get", 00:04:32.003 "accel_crypto_key_create", 00:04:32.003 "accel_assign_opc", 00:04:32.003 "accel_get_module_info", 00:04:32.003 "accel_get_opc_assignments", 00:04:32.003 "vmd_rescan", 00:04:32.003 "vmd_remove_device", 00:04:32.003 "vmd_enable", 00:04:32.003 "sock_get_default_impl", 00:04:32.003 "sock_set_default_impl", 
00:04:32.003 "sock_impl_set_options", 00:04:32.003 "sock_impl_get_options", 00:04:32.003 "iobuf_get_stats", 00:04:32.003 "iobuf_set_options", 00:04:32.003 "keyring_get_keys", 00:04:32.003 "vfu_tgt_set_base_path", 00:04:32.003 "framework_get_pci_devices", 00:04:32.003 "framework_get_config", 00:04:32.003 "framework_get_subsystems", 00:04:32.003 "fsdev_set_opts", 00:04:32.003 "fsdev_get_opts", 00:04:32.003 "trace_get_info", 00:04:32.003 "trace_get_tpoint_group_mask", 00:04:32.003 "trace_disable_tpoint_group", 00:04:32.004 "trace_enable_tpoint_group", 00:04:32.004 "trace_clear_tpoint_mask", 00:04:32.004 "trace_set_tpoint_mask", 00:04:32.004 "notify_get_notifications", 00:04:32.004 "notify_get_types", 00:04:32.004 "spdk_get_version", 00:04:32.004 "rpc_get_methods" 00:04:32.004 ] 00:04:32.004 18:54:48 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:32.004 18:54:48 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:32.004 18:54:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:32.004 18:54:49 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:32.004 18:54:49 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2705733 00:04:32.004 18:54:49 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2705733 ']' 00:04:32.004 18:54:49 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2705733 00:04:32.004 18:54:49 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:32.004 18:54:49 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:32.004 18:54:49 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2705733 00:04:32.004 18:54:49 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:32.004 18:54:49 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:32.004 18:54:49 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2705733' 00:04:32.004 killing process with pid 2705733 00:04:32.004 18:54:49 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2705733 00:04:32.004 18:54:49 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2705733 00:04:32.264 00:04:32.264 real 0m1.535s 00:04:32.264 user 0m2.789s 00:04:32.264 sys 0m0.473s 00:04:32.264 18:54:49 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.264 18:54:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:32.264 ************************************ 00:04:32.264 END TEST spdkcli_tcp 00:04:32.264 ************************************ 00:04:32.264 18:54:49 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:32.264 18:54:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.264 18:54:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.264 18:54:49 -- common/autotest_common.sh@10 -- # set +x 00:04:32.264 ************************************ 00:04:32.264 START TEST dpdk_mem_utility 00:04:32.264 ************************************ 00:04:32.264 18:54:49 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:32.264 * Looking for test storage... 
00:04:32.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:32.264 18:54:49 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:32.264 18:54:49 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:32.264 18:54:49 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:32.525 18:54:49 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:32.525 18:54:49 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.525 18:54:49 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.525 18:54:49 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.525 18:54:49 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.525 18:54:49 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.525 18:54:49 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.525 18:54:49 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.525 18:54:49 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.525 18:54:49 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.525 18:54:49 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.525 18:54:49 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.525 18:54:49 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:32.525 18:54:49 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:32.525 18:54:49 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.525 18:54:49 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:32.525 18:54:49 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:32.525 18:54:49 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:32.525 18:54:49 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.525 18:54:49 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:32.525 18:54:49 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.525 18:54:49 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:32.525 18:54:49 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:32.525 18:54:49 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.525 18:54:49 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:32.525 18:54:49 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.525 18:54:49 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.525 18:54:49 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.525 18:54:49 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:32.525 18:54:49 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.525 18:54:49 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:32.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.525 --rc genhtml_branch_coverage=1 00:04:32.525 --rc genhtml_function_coverage=1 00:04:32.525 --rc genhtml_legend=1 00:04:32.525 --rc geninfo_all_blocks=1 00:04:32.525 --rc geninfo_unexecuted_blocks=1 00:04:32.525 00:04:32.525 ' 00:04:32.525 18:54:49 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:32.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.525 --rc 
genhtml_branch_coverage=1 00:04:32.525 --rc genhtml_function_coverage=1 00:04:32.525 --rc genhtml_legend=1 00:04:32.525 --rc geninfo_all_blocks=1 00:04:32.525 --rc geninfo_unexecuted_blocks=1 00:04:32.525 00:04:32.525 ' 00:04:32.525 18:54:49 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:32.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.525 --rc genhtml_branch_coverage=1 00:04:32.525 --rc genhtml_function_coverage=1 00:04:32.525 --rc genhtml_legend=1 00:04:32.525 --rc geninfo_all_blocks=1 00:04:32.525 --rc geninfo_unexecuted_blocks=1 00:04:32.525 00:04:32.525 ' 00:04:32.526 18:54:49 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:32.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.526 --rc genhtml_branch_coverage=1 00:04:32.526 --rc genhtml_function_coverage=1 00:04:32.526 --rc genhtml_legend=1 00:04:32.526 --rc geninfo_all_blocks=1 00:04:32.526 --rc geninfo_unexecuted_blocks=1 00:04:32.526 00:04:32.526 ' 00:04:32.526 18:54:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:32.526 18:54:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2706103 00:04:32.526 18:54:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2706103 00:04:32.526 18:54:49 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2706103 ']' 00:04:32.526 18:54:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:32.526 18:54:49 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.526 18:54:49 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:32.526 18:54:49 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.526 18:54:49 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:32.526 18:54:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:32.526 [2024-11-26 18:54:49.626761] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
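The dpdk_mem_utility harness above follows the standard autotest pattern: launch spdk_tgt, wait for its RPC socket, then inspect the target over RPC. The RPC of interest is env_dpdk_get_mem_stats, which makes the target write its DPDK heap/mempool/memzone map to /tmp/spdk_mem_dump.txt; scripts/dpdk_mem_info.py then renders the summaries shown below. A condensed stand-alone sketch of the same flow, with a simple poll standing in for the harness's waitforlisten helper:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/bin/spdk_tgt &
    tgt_pid=$!

    # Crude stand-in for waitforlisten: wait for the default RPC socket.
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done

    # The target writes its memory map to /tmp/spdk_mem_dump.txt and replies
    # with that filename, as the test output below shows.
    $SPDK/scripts/rpc.py env_dpdk_get_mem_stats

    $SPDK/scripts/dpdk_mem_info.py        # heap/mempool/memzone totals
    $SPDK/scripts/dpdk_mem_info.py -m 0   # detailed per-element listing (output below)

    kill "$tgt_pid"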
00:04:32.526 [2024-11-26 18:54:49.626840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2706103 ] 00:04:32.526 [2024-11-26 18:54:49.714573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.786 [2024-11-26 18:54:49.747747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.361 18:54:50 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.361 18:54:50 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:33.361 18:54:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:33.362 18:54:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:33.362 18:54:50 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.362 18:54:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:33.362 { 00:04:33.362 "filename": "/tmp/spdk_mem_dump.txt" 00:04:33.362 } 00:04:33.362 18:54:50 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.362 18:54:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:33.362 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:33.362 1 heaps totaling size 818.000000 MiB 00:04:33.362 size: 818.000000 MiB heap id: 0 00:04:33.362 end heaps---------- 00:04:33.362 9 mempools totaling size 603.782043 MiB 00:04:33.362 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:33.362 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:33.362 size: 100.555481 MiB name: bdev_io_2706103 00:04:33.362 size: 50.003479 MiB name: msgpool_2706103 00:04:33.362 size: 36.509338 MiB name: fsdev_io_2706103 00:04:33.362 size: 21.763794 MiB name: PDU_Pool 00:04:33.362 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:33.362 size: 4.133484 MiB name: evtpool_2706103 00:04:33.362 size: 0.026123 MiB name: Session_Pool 00:04:33.362 end mempools------- 00:04:33.362 6 memzones totaling size 4.142822 MiB 00:04:33.362 size: 1.000366 MiB name: RG_ring_0_2706103 00:04:33.362 size: 1.000366 MiB name: RG_ring_1_2706103 00:04:33.362 size: 1.000366 MiB name: RG_ring_4_2706103 00:04:33.362 size: 1.000366 MiB name: RG_ring_5_2706103 00:04:33.362 size: 0.125366 MiB name: RG_ring_2_2706103 00:04:33.362 size: 0.015991 MiB name: RG_ring_3_2706103 00:04:33.362 end memzones------- 00:04:33.362 18:54:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:33.362 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:33.362 list of free elements. 
size: 10.852478 MiB 00:04:33.362 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:33.362 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:33.362 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:33.362 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:33.362 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:33.362 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:33.362 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:33.362 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:33.362 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:33.362 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:33.362 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:33.362 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:33.362 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:33.362 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:33.362 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:33.362 list of standard malloc elements. size: 199.218628 MiB 00:04:33.362 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:33.362 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:33.362 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:33.362 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:33.362 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:33.362 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:33.362 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:33.362 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:33.362 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:33.362 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:33.362 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:33.362 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:33.362 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:33.362 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:33.362 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:33.362 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:33.362 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:33.362 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:33.362 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:33.362 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:33.362 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:33.362 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:33.362 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:33.362 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:33.362 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:33.362 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:33.362 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:33.362 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:33.362 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:33.362 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:33.362 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:33.362 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:33.362 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:33.362 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:33.362 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:33.362 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:33.362 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:33.362 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:33.362 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:33.362 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:33.362 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:33.362 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:33.362 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:33.362 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:33.362 list of memzone associated elements. size: 607.928894 MiB 00:04:33.362 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:33.362 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:33.362 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:33.362 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:33.362 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:33.362 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_2706103_0 00:04:33.362 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:33.362 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2706103_0 00:04:33.362 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:33.362 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2706103_0 00:04:33.362 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:33.362 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:33.362 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:33.362 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:33.362 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:33.362 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2706103_0 00:04:33.362 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:33.362 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2706103 00:04:33.362 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:33.362 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2706103 00:04:33.362 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:33.362 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:33.362 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:33.362 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:33.362 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:33.362 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:33.362 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:33.362 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:33.362 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:33.362 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2706103 00:04:33.362 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:33.362 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2706103 00:04:33.362 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:33.362 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2706103 00:04:33.362 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:04:33.362 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2706103 00:04:33.362 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:33.362 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2706103 00:04:33.362 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:33.362 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2706103 00:04:33.362 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:33.362 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:33.362 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:33.362 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:33.362 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:33.362 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:33.362 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:33.362 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2706103 00:04:33.362 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:33.362 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2706103 00:04:33.362 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:33.362 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:33.362 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:33.362 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:33.362 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:33.362 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2706103 00:04:33.362 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:33.362 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:33.362 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:33.362 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2706103 00:04:33.362 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:33.362 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2706103 00:04:33.362 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:33.362 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2706103 00:04:33.362 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:33.362 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:33.362 18:54:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:33.362 18:54:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2706103 00:04:33.362 18:54:50 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2706103 ']' 00:04:33.362 18:54:50 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2706103 00:04:33.362 18:54:50 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:33.362 18:54:50 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:33.362 18:54:50 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2706103 00:04:33.362 18:54:50 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:33.362 18:54:50 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:33.621 18:54:50 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2706103' 00:04:33.621 killing process with pid 2706103 00:04:33.621 18:54:50 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2706103 00:04:33.621 18:54:50 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2706103 00:04:33.621 00:04:33.621 real 0m1.396s 00:04:33.621 user 0m1.439s 00:04:33.621 sys 0m0.431s 00:04:33.621 18:54:50 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.621 18:54:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:33.621 ************************************ 00:04:33.621 END TEST dpdk_mem_utility 00:04:33.621 ************************************ 00:04:33.621 18:54:50 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:33.621 18:54:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.621 18:54:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.621 18:54:50 -- common/autotest_common.sh@10 -- # set +x 00:04:33.882 ************************************ 00:04:33.882 START TEST event 00:04:33.882 ************************************ 00:04:33.882 18:54:50 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:33.882 * Looking for test storage... 00:04:33.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:33.882 18:54:50 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:33.882 18:54:50 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:33.882 18:54:50 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:33.882 18:54:51 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:33.882 18:54:51 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:33.882 18:54:51 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:33.882 18:54:51 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:33.882 18:54:51 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.882 18:54:51 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:33.882 18:54:51 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:33.882 18:54:51 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:33.882 18:54:51 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:33.882 18:54:51 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:33.882 18:54:51 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:33.882 18:54:51 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:33.882 18:54:51 event -- scripts/common.sh@344 -- # case "$op" in 00:04:33.882 18:54:51 event -- scripts/common.sh@345 -- # : 1 00:04:33.882 18:54:51 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:33.882 18:54:51 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:33.882 18:54:51 event -- scripts/common.sh@365 -- # decimal 1 00:04:33.882 18:54:51 event -- scripts/common.sh@353 -- # local d=1 00:04:33.882 18:54:51 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.882 18:54:51 event -- scripts/common.sh@355 -- # echo 1 00:04:33.882 18:54:51 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:33.882 18:54:51 event -- scripts/common.sh@366 -- # decimal 2 00:04:33.882 18:54:51 event -- scripts/common.sh@353 -- # local d=2 00:04:33.882 18:54:51 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.882 18:54:51 event -- scripts/common.sh@355 -- # echo 2 00:04:33.882 18:54:51 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:33.882 18:54:51 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:33.882 18:54:51 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:33.882 18:54:51 event -- scripts/common.sh@368 -- # return 0 00:04:33.882 18:54:51 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.882 18:54:51 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:33.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.882 --rc genhtml_branch_coverage=1 00:04:33.882 --rc genhtml_function_coverage=1 00:04:33.882 --rc genhtml_legend=1 00:04:33.882 --rc geninfo_all_blocks=1 00:04:33.882 --rc geninfo_unexecuted_blocks=1 00:04:33.882 00:04:33.882 ' 00:04:33.882 18:54:51 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:33.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.882 --rc genhtml_branch_coverage=1 00:04:33.882 --rc genhtml_function_coverage=1 00:04:33.882 --rc genhtml_legend=1 00:04:33.882 --rc geninfo_all_blocks=1 00:04:33.882 --rc geninfo_unexecuted_blocks=1 00:04:33.882 00:04:33.882 ' 00:04:33.882 18:54:51 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:33.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.883 --rc genhtml_branch_coverage=1 00:04:33.883 --rc genhtml_function_coverage=1 00:04:33.883 --rc genhtml_legend=1 00:04:33.883 --rc geninfo_all_blocks=1 00:04:33.883 --rc geninfo_unexecuted_blocks=1 00:04:33.883 00:04:33.883 ' 00:04:33.883 18:54:51 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:33.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.883 --rc genhtml_branch_coverage=1 00:04:33.883 --rc genhtml_function_coverage=1 00:04:33.883 --rc genhtml_legend=1 00:04:33.883 --rc geninfo_all_blocks=1 00:04:33.883 --rc geninfo_unexecuted_blocks=1 00:04:33.883 00:04:33.883 ' 00:04:33.883 18:54:51 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:33.883 18:54:51 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:33.883 18:54:51 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:33.883 18:54:51 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:33.883 18:54:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.883 18:54:51 event -- common/autotest_common.sh@10 -- # set +x 00:04:33.883 ************************************ 00:04:33.883 START TEST event_perf 00:04:33.883 ************************************ 00:04:33.883 18:54:51 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:34.144 Running I/O for 1 seconds...[2024-11-26 18:54:51.100398] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:04:34.144 [2024-11-26 18:54:51.100518] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2706403 ] 00:04:34.144 [2024-11-26 18:54:51.191187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:34.144 [2024-11-26 18:54:51.234947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:34.144 [2024-11-26 18:54:51.235103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:34.144 [2024-11-26 18:54:51.235263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.144 Running I/O for 1 seconds...[2024-11-26 18:54:51.235264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:35.086 00:04:35.086 lcore 0: 177295 00:04:35.086 lcore 1: 177297 00:04:35.086 lcore 2: 177298 00:04:35.086 lcore 3: 177296 00:04:35.086 done. 00:04:35.086 00:04:35.086 real 0m1.185s 00:04:35.086 user 0m4.085s 00:04:35.086 sys 0m0.097s 00:04:35.086 18:54:52 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.086 18:54:52 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:35.086 ************************************ 00:04:35.086 END TEST event_perf 00:04:35.086 ************************************ 00:04:35.348 18:54:52 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:35.348 18:54:52 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:35.348 18:54:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.348 18:54:52 event -- common/autotest_common.sh@10 -- # set +x 00:04:35.348 ************************************ 00:04:35.348 START TEST event_reactor 00:04:35.348 ************************************ 00:04:35.348 18:54:52 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:35.348 [2024-11-26 18:54:52.362128] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
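The event_perf run above starts one reactor per core in the 0xF mask and schedules events on every lcore for the requested second; the per-lcore totals (about 177 k each) and the "done." line are the test binary's own output, interleaved with the reactor start notices. It can be rerun by hand with a different mask to see the counters scale with core count rather than with time:

    EVENT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
    $EVENT/event_perf/event_perf -m 0xF -t 1   # four reactors, one second, as above
    $EVENT/event_perf/event_perf -m 0x3 -t 1   # same run restricted to two cores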
00:04:35.348 [2024-11-26 18:54:52.362250] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2706733 ] 00:04:35.348 [2024-11-26 18:54:52.449657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.348 [2024-11-26 18:54:52.484128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.733 test_start 00:04:36.733 oneshot 00:04:36.733 tick 100 00:04:36.733 tick 100 00:04:36.733 tick 250 00:04:36.733 tick 100 00:04:36.733 tick 100 00:04:36.733 tick 100 00:04:36.733 tick 250 00:04:36.733 tick 500 00:04:36.733 tick 100 00:04:36.733 tick 100 00:04:36.733 tick 250 00:04:36.733 tick 100 00:04:36.733 tick 100 00:04:36.733 test_end 00:04:36.733 00:04:36.733 real 0m1.170s 00:04:36.733 user 0m1.088s 00:04:36.733 sys 0m0.078s 00:04:36.733 18:54:53 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.733 18:54:53 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:36.733 ************************************ 00:04:36.733 END TEST event_reactor 00:04:36.733 ************************************ 00:04:36.733 18:54:53 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:36.733 18:54:53 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:36.733 18:54:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.733 18:54:53 event -- common/autotest_common.sh@10 -- # set +x 00:04:36.733 ************************************ 00:04:36.733 START TEST event_reactor_perf 00:04:36.733 ************************************ 00:04:36.733 18:54:53 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:36.733 [2024-11-26 18:54:53.612256] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
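The event_reactor test above drives a single reactor with a one-shot event plus timed pollers; the oneshot/tick lines between test_start and test_end are the binary's trace of those callbacks firing (the 100/250/500 values look like poller periods), and reactor_perf, starting below, measures raw event throughput on one core. Both take only a runtime argument:

    EVENT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
    $EVENT/reactor/reactor -t 1             # emits the oneshot/tick trace above
    $EVENT/reactor_perf/reactor_perf -t 1   # prints an events-per-second figure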
00:04:36.733 [2024-11-26 18:54:53.612356] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2707084 ] 00:04:36.733 [2024-11-26 18:54:53.701188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.733 [2024-11-26 18:54:53.738246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.674 test_start 00:04:37.674 test_end 00:04:37.674 Performance: 534793 events per second 00:04:37.674 00:04:37.674 real 0m1.174s 00:04:37.674 user 0m1.088s 00:04:37.674 sys 0m0.081s 00:04:37.674 18:54:54 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.674 18:54:54 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:37.674 ************************************ 00:04:37.674 END TEST event_reactor_perf 00:04:37.674 ************************************ 00:04:37.675 18:54:54 event -- event/event.sh@49 -- # uname -s 00:04:37.675 18:54:54 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:37.675 18:54:54 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:37.675 18:54:54 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.675 18:54:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.675 18:54:54 event -- common/autotest_common.sh@10 -- # set +x 00:04:37.675 ************************************ 00:04:37.675 START TEST event_scheduler 00:04:37.675 ************************************ 00:04:37.675 18:54:54 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:37.936 * Looking for test storage... 
00:04:37.936 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:37.936 18:54:54 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:37.936 18:54:54 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:37.936 18:54:54 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:37.936 18:54:55 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:37.936 18:54:55 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.936 18:54:55 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.936 18:54:55 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.936 18:54:55 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.936 18:54:55 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.936 18:54:55 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.936 18:54:55 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.936 18:54:55 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.936 18:54:55 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.936 18:54:55 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.936 18:54:55 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.936 18:54:55 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:37.936 18:54:55 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:37.936 18:54:55 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.936 18:54:55 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:37.936 18:54:55 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:37.936 18:54:55 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:37.936 18:54:55 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.936 18:54:55 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:37.936 18:54:55 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.936 18:54:55 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:37.936 18:54:55 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:37.936 18:54:55 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.936 18:54:55 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:37.936 18:54:55 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.936 18:54:55 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.936 18:54:55 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.936 18:54:55 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:37.936 18:54:55 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.936 18:54:55 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:37.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.936 --rc genhtml_branch_coverage=1 00:04:37.936 --rc genhtml_function_coverage=1 00:04:37.936 --rc genhtml_legend=1 00:04:37.936 --rc geninfo_all_blocks=1 00:04:37.936 --rc geninfo_unexecuted_blocks=1 00:04:37.936 00:04:37.936 ' 00:04:37.936 18:54:55 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:37.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.936 --rc genhtml_branch_coverage=1 00:04:37.936 --rc genhtml_function_coverage=1 00:04:37.936 --rc genhtml_legend=1 00:04:37.936 --rc geninfo_all_blocks=1 00:04:37.936 --rc geninfo_unexecuted_blocks=1 00:04:37.936 00:04:37.936 ' 00:04:37.936 18:54:55 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:37.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.936 --rc genhtml_branch_coverage=1 00:04:37.936 --rc genhtml_function_coverage=1 00:04:37.936 --rc genhtml_legend=1 00:04:37.936 --rc geninfo_all_blocks=1 00:04:37.936 --rc geninfo_unexecuted_blocks=1 00:04:37.936 00:04:37.936 ' 00:04:37.936 18:54:55 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:37.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.936 --rc genhtml_branch_coverage=1 00:04:37.936 --rc genhtml_function_coverage=1 00:04:37.936 --rc genhtml_legend=1 00:04:37.936 --rc geninfo_all_blocks=1 00:04:37.936 --rc geninfo_unexecuted_blocks=1 00:04:37.936 00:04:37.936 ' 00:04:37.936 18:54:55 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:37.936 18:54:55 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2707471 00:04:37.936 18:54:55 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:37.936 18:54:55 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2707471 00:04:37.936 18:54:55 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc 
-f 00:04:37.936 18:54:55 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2707471 ']' 00:04:37.936 18:54:55 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.936 18:54:55 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.936 18:54:55 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.936 18:54:55 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.936 18:54:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:37.936 [2024-11-26 18:54:55.101751] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:04:37.936 [2024-11-26 18:54:55.101832] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2707471 ] 00:04:38.197 [2024-11-26 18:54:55.193191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:38.197 [2024-11-26 18:54:55.248475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.197 [2024-11-26 18:54:55.248610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.197 [2024-11-26 18:54:55.248771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:38.197 [2024-11-26 18:54:55.248771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:38.767 18:54:55 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.767 18:54:55 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:38.767 18:54:55 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:38.767 18:54:55 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.768 18:54:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:38.768 [2024-11-26 18:54:55.919237] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:38.768 [2024-11-26 18:54:55.919257] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:38.768 [2024-11-26 18:54:55.919268] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:38.768 [2024-11-26 18:54:55.919274] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:38.768 [2024-11-26 18:54:55.919279] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:38.768 18:54:55 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.768 18:54:55 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:38.768 18:54:55 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.768 18:54:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:39.029 [2024-11-26 18:54:55.986991] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
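Because the scheduler app above was started with --wait-for-rpc, framework initialization pauses until RPCs arrive: the harness first switches to the dynamic scheduler (the dpdk_governor error is expected in this environment, since the 0xF mask covers only part of a set of SMT siblings, and the app proceeds without a governor), then releases startup with framework_start_init. The same two-step bring-up works against any SPDK app started that way; a sketch using the flags the harness passes above:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &

    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done

    # With --wait-for-rpc the app waits here, so the scheduler is configured
    # before the framework finishes coming up.
    $SPDK/scripts/rpc.py framework_set_scheduler dynamic
    $SPDK/scripts/rpc.py framework_start_init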
00:04:39.029 18:54:55 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.029 18:54:55 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:39.029 18:54:55 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.029 18:54:55 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.029 18:54:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:39.029 ************************************ 00:04:39.029 START TEST scheduler_create_thread 00:04:39.029 ************************************ 00:04:39.029 18:54:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:39.029 18:54:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:39.029 18:54:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.029 18:54:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.029 2 00:04:39.029 18:54:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.029 18:54:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:39.029 18:54:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.029 18:54:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.029 3 00:04:39.029 18:54:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.029 18:54:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:39.029 18:54:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.029 18:54:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.029 4 00:04:39.029 18:54:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.029 18:54:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:39.029 18:54:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.029 18:54:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.029 5 00:04:39.029 18:54:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.029 18:54:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:39.029 18:54:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.029 18:54:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.029 6 00:04:39.029 18:54:56 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.029 18:54:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:39.029 18:54:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.029 18:54:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.029 7 00:04:39.029 18:54:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.029 18:54:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:39.029 18:54:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.029 18:54:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.029 8 00:04:39.029 18:54:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.029 18:54:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:39.029 18:54:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.029 18:54:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.029 9 00:04:39.029 18:54:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.029 18:54:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:39.029 18:54:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.029 18:54:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.601 10 00:04:39.601 18:54:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.601 18:54:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:39.601 18:54:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.601 18:54:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.986 18:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.986 18:54:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:40.986 18:54:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:40.986 18:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.986 18:54:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:41.557 18:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.557 18:54:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:41.557 18:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.557 18:54:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.503 18:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.503 18:54:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:42.503 18:54:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:42.503 18:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.503 18:54:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.072 18:55:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.072 00:04:43.072 real 0m4.226s 00:04:43.072 user 0m0.027s 00:04:43.072 sys 0m0.006s 00:04:43.072 18:55:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.072 18:55:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.072 ************************************ 00:04:43.072 END TEST scheduler_create_thread 00:04:43.072 ************************************ 00:04:43.331 18:55:00 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:43.331 18:55:00 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2707471 00:04:43.331 18:55:00 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2707471 ']' 00:04:43.331 18:55:00 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 2707471 00:04:43.331 18:55:00 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:43.331 18:55:00 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:43.331 18:55:00 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2707471 00:04:43.331 18:55:00 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:43.331 18:55:00 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:43.331 18:55:00 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2707471' 00:04:43.331 killing process with pid 2707471 00:04:43.331 18:55:00 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2707471 00:04:43.331 18:55:00 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2707471 00:04:43.331 [2024-11-26 18:55:00.528686] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
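scheduler_create_thread above is driven through an rpc.py plugin: every rpc_cmd carries --plugin scheduler_plugin, which adds the scheduler_thread_* methods on top of the built-in RPCs. The test creates pinned busy and idle threads with core masks 0x1 through 0x8 (-a is the percentage of time the thread reports itself busy), adds an unpinned 30 % thread, then retunes thread 11 to 50 % active and deletes thread 12, letting the dynamic scheduler rebalance. Condensed to plain rpc.py calls, assuming scheduler_plugin is importable (the harness arranges the plugin path for rpc.py):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py --plugin scheduler_plugin"

    $RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100   # returns a thread id
    $RPC scheduler_thread_create -n idle_pinned   -m 0x1 -a 0
    $RPC scheduler_thread_create -n one_third_active -a 30        # unpinned, 30 % busy

    # Threads are retuned or removed by the id returned at creation.
    $RPC scheduler_thread_set_active 11 50
    $RPC scheduler_thread_delete 12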
00:04:43.590 00:04:43.590 real 0m5.837s 00:04:43.590 user 0m12.898s 00:04:43.590 sys 0m0.418s 00:04:43.590 18:55:00 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.590 18:55:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:43.590 ************************************ 00:04:43.590 END TEST event_scheduler 00:04:43.590 ************************************ 00:04:43.590 18:55:00 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:43.590 18:55:00 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:43.590 18:55:00 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.590 18:55:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.590 18:55:00 event -- common/autotest_common.sh@10 -- # set +x 00:04:43.590 ************************************ 00:04:43.590 START TEST app_repeat 00:04:43.590 ************************************ 00:04:43.590 18:55:00 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:43.590 18:55:00 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.590 18:55:00 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.590 18:55:00 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:43.590 18:55:00 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:43.590 18:55:00 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:43.590 18:55:00 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:43.590 18:55:00 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:43.590 18:55:00 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2708542 00:04:43.590 18:55:00 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:43.590 18:55:00 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:43.590 18:55:00 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2708542' 00:04:43.590 Process app_repeat pid: 2708542 00:04:43.590 18:55:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:43.590 18:55:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:43.590 spdk_app_start Round 0 00:04:43.590 18:55:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2708542 /var/tmp/spdk-nbd.sock 00:04:43.590 18:55:00 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2708542 ']' 00:04:43.590 18:55:00 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:43.590 18:55:00 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.590 18:55:00 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:43.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:43.590 18:55:00 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.590 18:55:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:43.850 [2024-11-26 18:55:00.808080] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
00:04:43.850 [2024-11-26 18:55:00.808138] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2708542 ] 00:04:43.850 [2024-11-26 18:55:00.894826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:43.850 [2024-11-26 18:55:00.927021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.850 [2024-11-26 18:55:00.927021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.850 18:55:01 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.850 18:55:01 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:43.850 18:55:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:44.112 Malloc0 00:04:44.112 18:55:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:44.372 Malloc1 00:04:44.372 18:55:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:44.372 18:55:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.372 18:55:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:44.372 18:55:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:44.372 18:55:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.372 18:55:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:44.372 18:55:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:44.372 18:55:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.372 18:55:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:44.372 18:55:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:44.372 18:55:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.372 18:55:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:44.372 18:55:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:44.372 18:55:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:44.372 18:55:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:44.372 18:55:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:44.632 /dev/nbd0 00:04:44.632 18:55:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:44.632 18:55:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:44.632 18:55:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:44.632 18:55:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:44.632 18:55:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:44.632 18:55:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:44.632 18:55:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:04:44.632 18:55:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:44.632 18:55:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:44.632 18:55:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:44.632 18:55:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:44.632 1+0 records in 00:04:44.632 1+0 records out 00:04:44.632 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242134 s, 16.9 MB/s 00:04:44.632 18:55:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:44.632 18:55:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:44.632 18:55:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:44.632 18:55:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:44.632 18:55:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:44.632 18:55:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:44.632 18:55:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:44.632 18:55:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:44.632 /dev/nbd1 00:04:44.891 18:55:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:44.891 18:55:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:44.891 18:55:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:44.891 18:55:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:44.891 18:55:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:44.891 18:55:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:44.891 18:55:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:44.891 18:55:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:44.891 18:55:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:44.891 18:55:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:44.891 18:55:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:44.891 1+0 records in 00:04:44.891 1+0 records out 00:04:44.891 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285459 s, 14.3 MB/s 00:04:44.891 18:55:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:44.891 18:55:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:44.891 18:55:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:44.891 18:55:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:44.891 18:55:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:44.891 18:55:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:44.891 18:55:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:44.891 
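Both waitfornbd checks above follow one pattern, fully visible in the trace: poll /proc/partitions until the nbd device shows up, then prove it actually serves data by reading a single 4 KiB block with dd ... iflag=direct and confirming a non-zero size via stat. A condensed sketch (the temp-file path is illustrative, and the retry delay is assumed since no retries were needed in this run):

waitfornbd() {
    local nbd_name=$1 i size tmp=/tmp/nbdtest       # tmp path illustrative
    for ((i = 1; i <= 20; i++)); do                 # wait for the kernel to publish the device
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                                   # retry delay assumed; not visible in the trace
    done
    for ((i = 1; i <= 20; i++)); do                 # one direct-I/O read proves the device works
        dd if=/dev/$nbd_name of=$tmp bs=4096 count=1 iflag=direct || continue
        size=$(stat -c %s $tmp)
        rm -f $tmp
        [ "$size" != 0 ] && return 0
    done
    return 1
}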
18:55:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:44.891 18:55:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.891 18:55:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:44.891 18:55:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:44.891 { 00:04:44.891 "nbd_device": "/dev/nbd0", 00:04:44.891 "bdev_name": "Malloc0" 00:04:44.891 }, 00:04:44.891 { 00:04:44.891 "nbd_device": "/dev/nbd1", 00:04:44.891 "bdev_name": "Malloc1" 00:04:44.891 } 00:04:44.891 ]' 00:04:44.891 18:55:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:44.891 { 00:04:44.891 "nbd_device": "/dev/nbd0", 00:04:44.891 "bdev_name": "Malloc0" 00:04:44.891 }, 00:04:44.891 { 00:04:44.891 "nbd_device": "/dev/nbd1", 00:04:44.891 "bdev_name": "Malloc1" 00:04:44.891 } 00:04:44.891 ]' 00:04:44.891 18:55:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:45.151 18:55:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:45.151 /dev/nbd1' 00:04:45.151 18:55:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:45.151 /dev/nbd1' 00:04:45.151 18:55:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:45.151 18:55:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:45.151 18:55:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:45.151 18:55:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:45.151 18:55:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:45.151 18:55:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:45.151 18:55:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.151 18:55:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:45.151 18:55:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:45.151 18:55:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:45.151 18:55:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:45.151 18:55:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:45.151 256+0 records in 00:04:45.151 256+0 records out 00:04:45.151 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118987 s, 88.1 MB/s 00:04:45.151 18:55:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:45.151 18:55:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:45.151 256+0 records in 00:04:45.151 256+0 records out 00:04:45.151 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119739 s, 87.6 MB/s 00:04:45.151 18:55:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:45.151 18:55:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:45.151 256+0 records in 00:04:45.151 256+0 records out 00:04:45.151 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129259 s, 81.1 MB/s 00:04:45.151 18:55:02 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:45.151 18:55:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.151 18:55:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:45.151 18:55:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:45.151 18:55:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:45.151 18:55:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:45.151 18:55:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:45.151 18:55:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:45.151 18:55:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:45.151 18:55:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:45.151 18:55:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:45.151 18:55:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:45.151 18:55:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:45.151 18:55:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.151 18:55:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.151 18:55:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:45.151 18:55:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:45.151 18:55:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:45.151 18:55:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:45.411 18:55:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:45.411 18:55:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:45.411 18:55:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:45.411 18:55:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:45.411 18:55:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:45.411 18:55:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:45.411 18:55:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:45.411 18:55:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:45.411 18:55:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:45.411 18:55:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:45.411 18:55:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:45.411 18:55:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:45.411 18:55:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:45.411 18:55:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:45.411 18:55:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:45.411 18:55:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:45.411 18:55:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:45.411 18:55:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:45.411 18:55:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:45.411 18:55:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.411 18:55:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:45.671 18:55:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:45.671 18:55:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:45.671 18:55:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:45.671 18:55:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:45.671 18:55:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:45.671 18:55:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:45.671 18:55:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:45.671 18:55:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:45.671 18:55:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:45.671 18:55:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:45.671 18:55:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:45.671 18:55:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:45.671 18:55:02 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:45.932 18:55:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:45.932 [2024-11-26 18:55:03.085890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:45.932 [2024-11-26 18:55:03.115524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.932 [2024-11-26 18:55:03.115525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.192 [2024-11-26 18:55:03.144693] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:46.192 [2024-11-26 18:55:03.144723] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:49.494 18:55:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:49.494 18:55:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:49.494 spdk_app_start Round 1 00:04:49.494 18:55:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2708542 /var/tmp/spdk-nbd.sock 00:04:49.494 18:55:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2708542 ']' 00:04:49.494 18:55:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:49.494 18:55:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.494 18:55:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:49.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
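The Round 0 pass just completed is the heart of nbd_rpc_data_verify: stage 1 MiB of /dev/urandom in a temp file, dd it onto each exported nbd device with oflag=direct, then compare each device against the file with cmp -b -n 1M so any mismatched byte fails the run. The same write/verify pair repeats in Rounds 1 and 2 below. A minimal sketch (temp path illustrative):

# Sketch of the write/verify pass from the trace above.
tmp=/tmp/nbdrandtest                                    # illustrative path
nbd_list=(/dev/nbd0 /dev/nbd1)
dd if=/dev/urandom of=$tmp bs=4096 count=256            # stage 1 MiB of random data
for nbd in "${nbd_list[@]}"; do
    dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct   # write pass, direct I/O
done
for nbd in "${nbd_list[@]}"; do
    cmp -b -n 1M $tmp $nbd                              # verify pass; exits non-zero on any mismatch
done
rm $tmp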
00:04:49.494 18:55:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.494 18:55:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:49.494 18:55:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:49.494 18:55:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:49.494 18:55:06 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:49.494 Malloc0 00:04:49.494 18:55:06 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:49.494 Malloc1 00:04:49.494 18:55:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:49.494 18:55:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.494 18:55:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:49.494 18:55:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:49.494 18:55:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.494 18:55:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:49.494 18:55:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:49.494 18:55:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.494 18:55:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:49.494 18:55:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:49.494 18:55:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.494 18:55:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:49.494 18:55:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:49.494 18:55:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:49.494 18:55:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:49.494 18:55:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:49.755 /dev/nbd0 00:04:49.755 18:55:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:49.755 18:55:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:49.755 18:55:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:49.755 18:55:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:49.755 18:55:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:49.755 18:55:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:49.755 18:55:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:49.755 18:55:06 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:49.755 18:55:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:49.755 18:55:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:49.755 18:55:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:49.755 1+0 records in 00:04:49.755 1+0 records out 00:04:49.755 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275988 s, 14.8 MB/s 00:04:49.755 18:55:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:49.755 18:55:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:49.755 18:55:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:49.755 18:55:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:49.755 18:55:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:49.755 18:55:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:49.755 18:55:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:49.755 18:55:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:50.016 /dev/nbd1 00:04:50.016 18:55:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:50.016 18:55:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:50.016 18:55:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:50.016 18:55:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:50.016 18:55:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:50.016 18:55:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:50.016 18:55:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:50.016 18:55:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:50.016 18:55:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:50.016 18:55:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:50.017 18:55:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:50.017 1+0 records in 00:04:50.017 1+0 records out 00:04:50.017 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288097 s, 14.2 MB/s 00:04:50.017 18:55:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:50.017 18:55:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:50.017 18:55:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:50.017 18:55:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:50.017 18:55:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:50.017 18:55:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:50.017 18:55:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.017 18:55:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:50.017 18:55:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.017 18:55:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:50.017 18:55:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:50.017 { 00:04:50.017 "nbd_device": "/dev/nbd0", 00:04:50.017 "bdev_name": "Malloc0" 00:04:50.017 }, 00:04:50.017 { 00:04:50.017 "nbd_device": "/dev/nbd1", 00:04:50.017 "bdev_name": "Malloc1" 00:04:50.017 } 00:04:50.017 ]' 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:50.278 { 00:04:50.278 "nbd_device": "/dev/nbd0", 00:04:50.278 "bdev_name": "Malloc0" 00:04:50.278 }, 00:04:50.278 { 00:04:50.278 "nbd_device": "/dev/nbd1", 00:04:50.278 "bdev_name": "Malloc1" 00:04:50.278 } 00:04:50.278 ]' 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:50.278 /dev/nbd1' 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:50.278 /dev/nbd1' 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:50.278 256+0 records in 00:04:50.278 256+0 records out 00:04:50.278 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011899 s, 88.1 MB/s 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:50.278 256+0 records in 00:04:50.278 256+0 records out 00:04:50.278 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119393 s, 87.8 MB/s 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:50.278 256+0 records in 00:04:50.278 256+0 records out 00:04:50.278 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129574 s, 80.9 MB/s 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:50.278 18:55:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:50.540 18:55:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:50.540 18:55:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:50.540 18:55:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:50.540 18:55:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:50.540 18:55:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:50.540 18:55:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:50.540 18:55:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:50.540 18:55:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:50.540 18:55:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:50.540 18:55:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:50.540 18:55:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:50.540 18:55:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:50.540 18:55:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:50.540 18:55:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:50.540 18:55:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:50.540 18:55:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:50.540 18:55:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:50.540 18:55:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:50.540 18:55:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:50.540 18:55:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.540 18:55:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:50.801 18:55:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:50.801 18:55:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:50.801 18:55:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:50.801 18:55:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:50.801 18:55:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:50.801 18:55:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:50.801 18:55:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:50.801 18:55:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:50.801 18:55:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:50.801 18:55:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:50.801 18:55:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:50.801 18:55:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:50.801 18:55:07 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:51.062 18:55:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:51.062 [2024-11-26 18:55:08.240801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:51.062 [2024-11-26 18:55:08.269623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.062 [2024-11-26 18:55:08.269623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.322 [2024-11-26 18:55:08.299356] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:51.322 [2024-11-26 18:55:08.299389] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:54.620 18:55:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:54.620 18:55:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:54.620 spdk_app_start Round 2 00:04:54.620 18:55:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2708542 /var/tmp/spdk-nbd.sock 00:04:54.620 18:55:11 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2708542 ']' 00:04:54.620 18:55:11 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:54.620 18:55:11 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.620 18:55:11 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:54.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
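The empty nbd_get_count that closed Round 1 above (and Round 0 before it) is a single jq pipeline: list the exported disks over the app's RPC socket, pull each .nbd_device field, and count matches of /dev/nbd. Since grep -c exits non-zero when the count is 0, a trailing guard is needed, which matches the bare true visible in the trace. A sketch:

nbd_get_count() {
    local rpc_server=$1 disks
    disks=$(./scripts/rpc.py -s "$rpc_server" nbd_get_disks | jq -r '.[] | .nbd_device')
    echo "$disks" | grep -c /dev/nbd || true   # grep -c prints 0 but exits 1 on no matches
}
count=$(nbd_get_count /var/tmp/spdk-nbd.sock)  # 2 while both disks are attached, 0 after teardown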
00:04:54.620 18:55:11 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.620 18:55:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:54.620 18:55:11 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.620 18:55:11 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:54.620 18:55:11 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:54.620 Malloc0 00:04:54.620 18:55:11 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:54.620 Malloc1 00:04:54.620 18:55:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:54.620 18:55:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.620 18:55:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:54.620 18:55:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:54.620 18:55:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.620 18:55:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:54.620 18:55:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:54.620 18:55:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.620 18:55:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:54.620 18:55:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:54.620 18:55:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.620 18:55:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:54.620 18:55:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:54.620 18:55:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:54.620 18:55:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.620 18:55:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:54.882 /dev/nbd0 00:04:54.882 18:55:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:54.882 18:55:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:54.882 18:55:11 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:54.882 18:55:11 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:54.882 18:55:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:54.882 18:55:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:54.882 18:55:11 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:54.882 18:55:11 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:54.882 18:55:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:54.882 18:55:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:54.882 18:55:11 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:54.882 1+0 records in 00:04:54.882 1+0 records out 00:04:54.882 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027648 s, 14.8 MB/s 00:04:54.882 18:55:11 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:54.882 18:55:11 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:54.882 18:55:11 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:54.882 18:55:11 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:54.882 18:55:11 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:54.882 18:55:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:54.882 18:55:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.882 18:55:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:55.144 /dev/nbd1 00:04:55.144 18:55:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:55.144 18:55:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:55.144 18:55:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:55.144 18:55:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:55.144 18:55:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:55.144 18:55:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:55.144 18:55:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:55.144 18:55:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:55.144 18:55:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:55.144 18:55:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:55.144 18:55:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:55.144 1+0 records in 00:04:55.144 1+0 records out 00:04:55.144 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280255 s, 14.6 MB/s 00:04:55.144 18:55:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:55.144 18:55:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:55.144 18:55:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:55.144 18:55:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:55.144 18:55:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:55.144 18:55:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:55.144 18:55:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.144 18:55:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:55.144 18:55:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.144 18:55:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:55.406 18:55:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:55.406 { 00:04:55.406 "nbd_device": "/dev/nbd0", 00:04:55.406 "bdev_name": "Malloc0" 00:04:55.406 }, 00:04:55.406 { 00:04:55.406 "nbd_device": "/dev/nbd1", 00:04:55.406 "bdev_name": "Malloc1" 00:04:55.406 } 00:04:55.406 ]' 00:04:55.406 18:55:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:55.406 { 00:04:55.406 "nbd_device": "/dev/nbd0", 00:04:55.406 "bdev_name": "Malloc0" 00:04:55.406 }, 00:04:55.406 { 00:04:55.406 "nbd_device": "/dev/nbd1", 00:04:55.406 "bdev_name": "Malloc1" 00:04:55.406 } 00:04:55.406 ]' 00:04:55.406 18:55:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:55.406 18:55:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:55.406 /dev/nbd1' 00:04:55.406 18:55:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:55.406 /dev/nbd1' 00:04:55.406 18:55:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:55.406 18:55:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:55.406 18:55:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:55.406 18:55:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:55.406 18:55:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:55.406 18:55:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:55.406 18:55:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.407 18:55:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:55.407 18:55:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:55.407 18:55:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:55.407 18:55:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:55.407 18:55:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:55.407 256+0 records in 00:04:55.407 256+0 records out 00:04:55.407 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127448 s, 82.3 MB/s 00:04:55.407 18:55:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:55.407 18:55:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:55.407 256+0 records in 00:04:55.407 256+0 records out 00:04:55.407 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122969 s, 85.3 MB/s 00:04:55.407 18:55:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:55.407 18:55:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:55.407 256+0 records in 00:04:55.407 256+0 records out 00:04:55.407 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128396 s, 81.7 MB/s 00:04:55.407 18:55:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:55.407 18:55:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.407 18:55:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:55.407 18:55:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:55.407 18:55:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:55.407 18:55:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:55.407 18:55:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:55.407 18:55:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:55.407 18:55:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:55.407 18:55:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:55.407 18:55:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:55.407 18:55:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:55.407 18:55:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:55.407 18:55:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.407 18:55:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.407 18:55:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:55.407 18:55:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:55.407 18:55:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:55.407 18:55:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:55.668 18:55:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:55.668 18:55:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:55.668 18:55:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:55.668 18:55:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:55.668 18:55:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:55.668 18:55:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:55.668 18:55:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:55.668 18:55:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:55.668 18:55:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:55.668 18:55:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:55.668 18:55:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:55.668 18:55:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:55.668 18:55:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:55.668 18:55:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:55.668 18:55:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:55.668 18:55:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:55.668 18:55:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:55.668 18:55:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:55.668 18:55:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:55.668 18:55:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.668 18:55:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:55.929 18:55:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:55.929 18:55:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:55.929 18:55:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:55.929 18:55:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:55.929 18:55:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:55.929 18:55:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:55.929 18:55:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:55.929 18:55:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:55.929 18:55:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:55.929 18:55:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:55.929 18:55:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:55.929 18:55:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:55.929 18:55:13 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:56.190 18:55:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:56.190 [2024-11-26 18:55:13.370527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:56.190 [2024-11-26 18:55:13.399474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.190 [2024-11-26 18:55:13.399474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.451 [2024-11-26 18:55:13.428748] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:56.451 [2024-11-26 18:55:13.428779] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:59.751 18:55:16 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2708542 /var/tmp/spdk-nbd.sock 00:04:59.751 18:55:16 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2708542 ']' 00:04:59.751 18:55:16 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:59.751 18:55:16 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.751 18:55:16 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:59.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
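Each teardown, including the final one in the lines that follow, goes through the harness's killprocess helper, and every probe it makes is visible in the trace: kill -0 to confirm the pid is alive, ps --no-headers -o comm= to fetch the process name (refusing to signal sudo itself), then kill followed by wait to reap the exit status. A condensed sketch:

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1                    # is the pid still alive?
    if [ "$(uname)" = Linux ]; then
        local name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1            # never signal the sudo wrapper itself
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                           # reap; a non-zero exit is expected here
}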
00:04:59.751 18:55:16 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.751 18:55:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:59.751 18:55:16 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.751 18:55:16 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:59.751 18:55:16 event.app_repeat -- event/event.sh@39 -- # killprocess 2708542 00:04:59.751 18:55:16 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2708542 ']' 00:04:59.751 18:55:16 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2708542 00:04:59.751 18:55:16 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:59.751 18:55:16 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.751 18:55:16 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2708542 00:04:59.751 18:55:16 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:59.751 18:55:16 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:59.751 18:55:16 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2708542' 00:04:59.751 killing process with pid 2708542 00:04:59.751 18:55:16 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2708542 00:04:59.751 18:55:16 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2708542 00:04:59.751 spdk_app_start is called in Round 0. 00:04:59.751 Shutdown signal received, stop current app iteration 00:04:59.751 Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 reinitialization... 00:04:59.751 spdk_app_start is called in Round 1. 00:04:59.751 Shutdown signal received, stop current app iteration 00:04:59.751 Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 reinitialization... 00:04:59.751 spdk_app_start is called in Round 2. 00:04:59.751 Shutdown signal received, stop current app iteration 00:04:59.751 Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 reinitialization... 00:04:59.751 spdk_app_start is called in Round 3. 
00:04:59.751 Shutdown signal received, stop current app iteration 00:04:59.751 18:55:16 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:59.751 18:55:16 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:59.751 00:04:59.751 real 0m15.856s 00:04:59.751 user 0m34.872s 00:04:59.751 sys 0m2.251s 00:04:59.751 18:55:16 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.751 18:55:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:59.751 ************************************ 00:04:59.751 END TEST app_repeat 00:04:59.751 ************************************ 00:04:59.751 18:55:16 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:59.751 18:55:16 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:59.751 18:55:16 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.751 18:55:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.751 18:55:16 event -- common/autotest_common.sh@10 -- # set +x 00:04:59.751 ************************************ 00:04:59.751 START TEST cpu_locks 00:04:59.751 ************************************ 00:04:59.751 18:55:16 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:59.751 * Looking for test storage... 00:04:59.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:59.751 18:55:16 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:59.751 18:55:16 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:04:59.751 18:55:16 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:59.751 18:55:16 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:59.751 18:55:16 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.751 18:55:16 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.751 18:55:16 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.751 18:55:16 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.751 18:55:16 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.751 18:55:16 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.751 18:55:16 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.751 18:55:16 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.751 18:55:16 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.751 18:55:16 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.751 18:55:16 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.751 18:55:16 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:59.751 18:55:16 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:59.751 18:55:16 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.751 18:55:16 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:59.751 18:55:16 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:59.751 18:55:16 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:59.751 18:55:16 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.751 18:55:16 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:59.751 18:55:16 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.751 18:55:16 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:59.751 18:55:16 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:59.751 18:55:16 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.751 18:55:16 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:59.751 18:55:16 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.751 18:55:16 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.751 18:55:16 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.751 18:55:16 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:59.751 18:55:16 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.751 18:55:16 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:59.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.751 --rc genhtml_branch_coverage=1 00:04:59.751 --rc genhtml_function_coverage=1 00:04:59.751 --rc genhtml_legend=1 00:04:59.751 --rc geninfo_all_blocks=1 00:04:59.751 --rc geninfo_unexecuted_blocks=1 00:04:59.751 00:04:59.751 ' 00:04:59.751 18:55:16 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:59.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.751 --rc genhtml_branch_coverage=1 00:04:59.751 --rc genhtml_function_coverage=1 00:04:59.751 --rc genhtml_legend=1 00:04:59.751 --rc geninfo_all_blocks=1 00:04:59.751 --rc geninfo_unexecuted_blocks=1 00:04:59.751 00:04:59.751 ' 00:04:59.751 18:55:16 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:59.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.751 --rc genhtml_branch_coverage=1 00:04:59.751 --rc genhtml_function_coverage=1 00:04:59.751 --rc genhtml_legend=1 00:04:59.751 --rc geninfo_all_blocks=1 00:04:59.751 --rc geninfo_unexecuted_blocks=1 00:04:59.751 00:04:59.751 ' 00:04:59.751 18:55:16 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:59.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.751 --rc genhtml_branch_coverage=1 00:04:59.751 --rc genhtml_function_coverage=1 00:04:59.751 --rc genhtml_legend=1 00:04:59.751 --rc geninfo_all_blocks=1 00:04:59.751 --rc geninfo_unexecuted_blocks=1 00:04:59.751 00:04:59.751 ' 00:04:59.751 18:55:16 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:59.751 18:55:16 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:59.751 18:55:16 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:59.751 18:55:16 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:59.752 18:55:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.752 18:55:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.752 18:55:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.752 ************************************ 
00:04:59.752 START TEST default_locks 00:04:59.752 ************************************ 00:04:59.752 18:55:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:59.752 18:55:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2712084 00:04:59.752 18:55:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2712084 00:04:59.752 18:55:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:59.752 18:55:16 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2712084 ']' 00:04:59.752 18:55:16 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.752 18:55:16 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.752 18:55:16 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.752 18:55:16 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.752 18:55:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:00.012 [2024-11-26 18:55:17.013249] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:05:00.012 [2024-11-26 18:55:17.013320] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2712084 ] 00:05:00.012 [2024-11-26 18:55:17.100698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.012 [2024-11-26 18:55:17.135358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.955 18:55:17 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.955 18:55:17 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:00.955 18:55:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2712084 00:05:00.955 18:55:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2712084 00:05:00.955 18:55:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:00.955 lslocks: write error 00:05:00.955 18:55:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2712084 00:05:00.955 18:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2712084 ']' 00:05:00.955 18:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2712084 00:05:00.955 18:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:00.955 18:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.955 18:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2712084 00:05:01.217 18:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:01.217 18:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:01.217 18:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 2712084' 00:05:01.217 killing process with pid 2712084 00:05:01.217 18:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2712084 00:05:01.217 18:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2712084 00:05:01.217 18:55:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2712084 00:05:01.217 18:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:01.217 18:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2712084 00:05:01.217 18:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:01.217 18:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:01.217 18:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:01.217 18:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:01.217 18:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2712084 00:05:01.217 18:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2712084 ']' 00:05:01.217 18:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.217 18:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.217 18:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:01.217 18:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.217 18:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:01.217 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2712084) - No such process 00:05:01.217 ERROR: process (pid: 2712084) is no longer running 00:05:01.217 18:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.217 18:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:01.217 18:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:01.217 18:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:01.217 18:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:01.217 18:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:01.217 18:55:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:01.217 18:55:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:01.217 18:55:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:01.217 18:55:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:01.217 00:05:01.217 real 0m1.414s 00:05:01.217 user 0m1.511s 00:05:01.217 sys 0m0.504s 00:05:01.217 18:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.217 18:55:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:01.217 ************************************ 00:05:01.217 END TEST default_locks 00:05:01.217 ************************************ 00:05:01.217 18:55:18 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:01.217 18:55:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.217 18:55:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.217 18:55:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:01.478 ************************************ 00:05:01.478 START TEST default_locks_via_rpc 00:05:01.478 ************************************ 00:05:01.478 18:55:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:01.478 18:55:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2712347 00:05:01.478 18:55:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2712347 00:05:01.478 18:55:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:01.478 18:55:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2712347 ']' 00:05:01.478 18:55:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.478 18:55:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.478 18:55:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
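The default_locks teardown above turns an expected failure into a pass: after killprocess, waitforlisten on the dead pid must fail, and the NOT helper inverts that failure. A simplified sketch of the pattern (the real helper in autotest_common.sh adds the valid_exec_arg and es>128 bookkeeping visible in the trace):

    # Succeed only when the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?      # run the command, capture its exit status
        (( es != 0 ))      # non-zero status from the command means NOT passes
    }

    killprocess "$spdk_tgt_pid"          # target is gone; its socket is closed
    NOT waitforlisten "$spdk_tgt_pid"    # passes because waitforlisten errors out

The 'kill: (2712084) - No such process' and 'ERROR: process (pid: 2712084) is no longer running' lines above are that expected failure being generated on purpose.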
00:05:01.478 18:55:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.478 18:55:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.478 [2024-11-26 18:55:18.513323] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:05:01.478 [2024-11-26 18:55:18.513379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2712347 ] 00:05:01.478 [2024-11-26 18:55:18.596243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.478 [2024-11-26 18:55:18.627220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.420 18:55:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.420 18:55:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:02.420 18:55:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:02.420 18:55:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.420 18:55:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.420 18:55:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.420 18:55:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:02.420 18:55:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:02.420 18:55:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:02.420 18:55:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:02.420 18:55:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:02.420 18:55:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.420 18:55:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.420 18:55:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.420 18:55:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2712347 00:05:02.420 18:55:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2712347 00:05:02.420 18:55:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:02.681 18:55:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2712347 00:05:02.681 18:55:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2712347 ']' 00:05:02.681 18:55:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2712347 00:05:02.681 18:55:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:02.681 18:55:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.681 18:55:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2712347 00:05:02.681 18:55:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.681 
18:55:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.681 18:55:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2712347' 00:05:02.681 killing process with pid 2712347 00:05:02.681 18:55:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2712347 00:05:02.682 18:55:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2712347 00:05:02.943 00:05:02.943 real 0m1.556s 00:05:02.943 user 0m1.678s 00:05:02.943 sys 0m0.535s 00:05:02.943 18:55:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.943 18:55:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.943 ************************************ 00:05:02.943 END TEST default_locks_via_rpc 00:05:02.943 ************************************ 00:05:02.943 18:55:20 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:02.943 18:55:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.943 18:55:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.943 18:55:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:02.943 ************************************ 00:05:02.943 START TEST non_locking_app_on_locked_coremask 00:05:02.943 ************************************ 00:05:02.943 18:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:02.943 18:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2712686 00:05:02.943 18:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2712686 /var/tmp/spdk.sock 00:05:02.943 18:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:02.943 18:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2712686 ']' 00:05:02.943 18:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.943 18:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.943 18:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.943 18:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.943 18:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:02.943 [2024-11-26 18:55:20.132398] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
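default_locks_via_rpc, which finished above, never passes --disable-cpumask-locks on the command line; it toggles the locks at runtime through the RPC socket instead. The core of that flow, as traced:

    rpc_cmd framework_disable_cpumask_locks   # target releases its /var/tmp/spdk_cpu_lock_* files
    no_locks                                  # helper: asserts the lock-file glob is now empty
    rpc_cmd framework_enable_cpumask_locks    # target re-claims locks for its assigned cores
    locks_exist "$spdk_tgt_pid"               # helper: lslocks -p <pid> | grep -q spdk_cpu_lock

The stray 'lslocks: write error' lines throughout this log are harmless: grep -q exits as soon as it matches, so lslocks sees a closed pipe.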
00:05:02.943 [2024-11-26 18:55:20.132460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2712686 ] 00:05:03.204 [2024-11-26 18:55:20.220283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.204 [2024-11-26 18:55:20.262132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.776 18:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.776 18:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:03.776 18:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:03.776 18:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2712872 00:05:03.776 18:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2712872 /var/tmp/spdk2.sock 00:05:03.776 18:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2712872 ']' 00:05:03.776 18:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:03.776 18:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.776 18:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:03.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:03.776 18:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.776 18:55:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:04.037 [2024-11-26 18:55:20.999719] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:05:04.037 [2024-11-26 18:55:20.999772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2712872 ] 00:05:04.037 [2024-11-26 18:55:21.089214] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:04.037 [2024-11-26 18:55:21.089240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.037 [2024-11-26 18:55:21.151178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.608 18:55:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.608 18:55:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:04.608 18:55:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2712686 00:05:04.608 18:55:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2712686 00:05:04.608 18:55:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:05.179 lslocks: write error 00:05:05.179 18:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2712686 00:05:05.179 18:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2712686 ']' 00:05:05.179 18:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2712686 00:05:05.179 18:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:05.179 18:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.179 18:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2712686 00:05:05.179 18:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.179 18:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.179 18:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2712686' 00:05:05.179 killing process with pid 2712686 00:05:05.179 18:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2712686 00:05:05.179 18:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2712686 00:05:05.440 18:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2712872 00:05:05.440 18:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2712872 ']' 00:05:05.440 18:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2712872 00:05:05.440 18:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:05.701 18:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.701 18:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2712872 00:05:05.701 18:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.701 18:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.701 18:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2712872' 00:05:05.701 
killing process with pid 2712872 00:05:05.701 18:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2712872 00:05:05.701 18:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2712872 00:05:05.701 00:05:05.701 real 0m2.823s 00:05:05.701 user 0m3.157s 00:05:05.701 sys 0m0.876s 00:05:05.701 18:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.701 18:55:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:05.701 ************************************ 00:05:05.701 END TEST non_locking_app_on_locked_coremask 00:05:05.701 ************************************ 00:05:05.962 18:55:22 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:05.962 18:55:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.962 18:55:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.962 18:55:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:05.962 ************************************ 00:05:05.962 START TEST locking_app_on_unlocked_coremask 00:05:05.962 ************************************ 00:05:05.962 18:55:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:05.962 18:55:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2713251 00:05:05.962 18:55:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2713251 /var/tmp/spdk.sock 00:05:05.962 18:55:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:05.962 18:55:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2713251 ']' 00:05:05.962 18:55:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.962 18:55:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.962 18:55:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.962 18:55:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.962 18:55:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:05.962 [2024-11-26 18:55:23.034315] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:05:05.962 [2024-11-26 18:55:23.034367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2713251 ] 00:05:05.962 [2024-11-26 18:55:23.117629] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
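non_locking_app_on_locked_coremask (ended above) and locking_app_on_unlocked_coremask (starting here) are mirror images of each other: one of the two targets holds the core-0 lock while the other opts out with --disable-cpumask-locks, and both run on mask 0x1 at once. The launch pattern common to both, sketched with this workspace's paths:

    spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

    "$spdk_tgt" -m 0x1 &                      # first instance claims core 0 and its lock file
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"

    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    spdk_tgt_pid2=$!                          # second instance shares core 0, skips the lock
    waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock

Which side carries --disable-cpumask-locks is the only thing the two tests swap.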
00:05:05.962 [2024-11-26 18:55:23.117650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.962 [2024-11-26 18:55:23.148328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.906 18:55:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.906 18:55:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:06.906 18:55:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2713582 00:05:06.906 18:55:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2713582 /var/tmp/spdk2.sock 00:05:06.906 18:55:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2713582 ']' 00:05:06.906 18:55:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:06.906 18:55:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:06.906 18:55:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.906 18:55:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:06.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:06.906 18:55:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.906 18:55:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:06.906 [2024-11-26 18:55:23.871670] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
00:05:06.906 [2024-11-26 18:55:23.871726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2713582 ] 00:05:06.906 [2024-11-26 18:55:23.960009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.906 [2024-11-26 18:55:24.018098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.612 18:55:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.612 18:55:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:07.612 18:55:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2713582 00:05:07.612 18:55:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2713582 00:05:07.612 18:55:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:07.886 lslocks: write error 00:05:07.886 18:55:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2713251 00:05:07.886 18:55:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2713251 ']' 00:05:07.886 18:55:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2713251 00:05:07.886 18:55:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:07.886 18:55:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.886 18:55:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2713251 00:05:07.886 18:55:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:07.886 18:55:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:07.886 18:55:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2713251' 00:05:07.886 killing process with pid 2713251 00:05:07.886 18:55:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2713251 00:05:07.886 18:55:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2713251 00:05:08.456 18:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2713582 00:05:08.456 18:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2713582 ']' 00:05:08.456 18:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2713582 00:05:08.456 18:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:08.456 18:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.456 18:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2713582 00:05:08.456 18:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:08.456 18:55:25 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:08.456 18:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2713582' 00:05:08.456 killing process with pid 2713582 00:05:08.456 18:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2713582 00:05:08.456 18:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2713582 00:05:08.456 00:05:08.456 real 0m2.635s 00:05:08.456 user 0m2.972s 00:05:08.456 sys 0m0.755s 00:05:08.456 18:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.456 18:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:08.456 ************************************ 00:05:08.456 END TEST locking_app_on_unlocked_coremask 00:05:08.456 ************************************ 00:05:08.456 18:55:25 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:08.456 18:55:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.456 18:55:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.456 18:55:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.717 ************************************ 00:05:08.717 START TEST locking_app_on_locked_coremask 00:05:08.717 ************************************ 00:05:08.717 18:55:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:08.717 18:55:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2713960 00:05:08.717 18:55:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2713960 /var/tmp/spdk.sock 00:05:08.717 18:55:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:08.717 18:55:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2713960 ']' 00:05:08.717 18:55:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.717 18:55:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.717 18:55:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.717 18:55:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.717 18:55:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:08.717 [2024-11-26 18:55:25.744009] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
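Every teardown in these tests funnels through the same killprocess helper, whose checks the xtrace lines above show inline. Condensed from what is traced (the helper's sudo special case never fires here, since the process name is always reactor_0):

    killprocess() {
        local pid=$1
        kill -0 "$pid"                                    # pid must still exist
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # the real helper branches when process_name = sudo; the plain path runs here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"        # reap it so a later NOT waitforlisten check is deterministic
    }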
00:05:08.717 [2024-11-26 18:55:25.744065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2713960 ] 00:05:08.717 [2024-11-26 18:55:25.829888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.717 [2024-11-26 18:55:25.862182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.659 18:55:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.659 18:55:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:09.659 18:55:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:09.659 18:55:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2713979 00:05:09.659 18:55:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2713979 /var/tmp/spdk2.sock 00:05:09.659 18:55:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:09.659 18:55:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2713979 /var/tmp/spdk2.sock 00:05:09.659 18:55:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:09.659 18:55:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.659 18:55:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:09.659 18:55:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.659 18:55:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2713979 /var/tmp/spdk2.sock 00:05:09.659 18:55:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2713979 ']' 00:05:09.659 18:55:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:09.659 18:55:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.659 18:55:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:09.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:09.659 18:55:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.659 18:55:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.659 [2024-11-26 18:55:26.563479] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
00:05:09.659 [2024-11-26 18:55:26.563530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2713979 ] 00:05:09.659 [2024-11-26 18:55:26.649059] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2713960 has claimed it. 00:05:09.659 [2024-11-26 18:55:26.649092] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:10.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2713979) - No such process 00:05:10.248 ERROR: process (pid: 2713979) is no longer running 00:05:10.248 18:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.248 18:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:10.248 18:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:10.248 18:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:10.248 18:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:10.248 18:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:10.248 18:55:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2713960 00:05:10.248 18:55:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2713960 00:05:10.248 18:55:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:10.508 lslocks: write error 00:05:10.508 18:55:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2713960 00:05:10.508 18:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2713960 ']' 00:05:10.508 18:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2713960 00:05:10.508 18:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:10.508 18:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:10.508 18:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2713960 00:05:10.769 18:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:10.769 18:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:10.769 18:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2713960' 00:05:10.769 killing process with pid 2713960 00:05:10.769 18:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2713960 00:05:10.769 18:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2713960 00:05:10.769 00:05:10.769 real 0m2.230s 00:05:10.769 user 0m2.509s 00:05:10.769 sys 0m0.625s 00:05:10.769 18:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:05:10.769 18:55:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.769 ************************************ 00:05:10.769 END TEST locking_app_on_locked_coremask 00:05:10.769 ************************************ 00:05:10.769 18:55:27 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:10.769 18:55:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.769 18:55:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.769 18:55:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.030 ************************************ 00:05:11.030 START TEST locking_overlapped_coremask 00:05:11.030 ************************************ 00:05:11.030 18:55:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:11.030 18:55:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2714339 00:05:11.030 18:55:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2714339 /var/tmp/spdk.sock 00:05:11.030 18:55:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:11.030 18:55:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2714339 ']' 00:05:11.030 18:55:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.030 18:55:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.030 18:55:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.030 18:55:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.030 18:55:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.030 [2024-11-26 18:55:28.053535] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
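locking_app_on_locked_coremask, closed out just above, proves the lock actually blocks: with pid 2713960 holding core 0, a second plain spdk_tgt on the same mask logs 'Cannot create lock on core 0, probably process 2713960 has claimed it' and exits before ever listening. The assertion is the same NOT inversion as before, sketched:

    # A holder is already up on -m 0x1; a second plain instance must fail to start.
    "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &
    spdk_tgt_pid2=$!
    NOT waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock   # passes: the app exited at lock claim

locking_overlapped_coremask, starting here with -m 0x7, repeats the experiment across partially overlapping masks instead of identical ones.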
00:05:11.030 [2024-11-26 18:55:28.053584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2714339 ] 00:05:11.030 [2024-11-26 18:55:28.138022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:11.030 [2024-11-26 18:55:28.170054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.030 [2024-11-26 18:55:28.170205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:11.030 [2024-11-26 18:55:28.170380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.972 18:55:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.972 18:55:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:11.972 18:55:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2714655 00:05:11.972 18:55:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2714655 /var/tmp/spdk2.sock 00:05:11.972 18:55:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:11.972 18:55:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:11.972 18:55:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2714655 /var/tmp/spdk2.sock 00:05:11.972 18:55:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:11.972 18:55:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:11.972 18:55:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:11.972 18:55:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:11.972 18:55:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2714655 /var/tmp/spdk2.sock 00:05:11.972 18:55:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2714655 ']' 00:05:11.972 18:55:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:11.972 18:55:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.972 18:55:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:11.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:11.972 18:55:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.972 18:55:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.972 [2024-11-26 18:55:28.903740] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
00:05:11.972 [2024-11-26 18:55:28.903795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2714655 ] 00:05:11.972 [2024-11-26 18:55:29.014433] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2714339 has claimed it. 00:05:11.972 [2024-11-26 18:55:29.014473] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:12.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2714655) - No such process 00:05:12.542 ERROR: process (pid: 2714655) is no longer running 00:05:12.542 18:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.542 18:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:12.542 18:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:12.542 18:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:12.542 18:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:12.542 18:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:12.542 18:55:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:12.542 18:55:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:12.542 18:55:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:12.542 18:55:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:12.542 18:55:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2714339 00:05:12.542 18:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2714339 ']' 00:05:12.542 18:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2714339 00:05:12.542 18:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:12.542 18:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.542 18:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2714339 00:05:12.542 18:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.542 18:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.542 18:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2714339' 00:05:12.542 killing process with pid 2714339 00:05:12.542 18:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2714339 00:05:12.542 18:55:29 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2714339 00:05:12.804 00:05:12.804 real 0m1.778s 00:05:12.804 user 0m5.171s 00:05:12.804 sys 0m0.373s 00:05:12.804 18:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.804 18:55:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.804 ************************************ 00:05:12.804 END TEST locking_overlapped_coremask 00:05:12.804 ************************************ 00:05:12.804 18:55:29 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:12.804 18:55:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.804 18:55:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.804 18:55:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.804 ************************************ 00:05:12.804 START TEST locking_overlapped_coremask_via_rpc 00:05:12.804 ************************************ 00:05:12.804 18:55:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:12.804 18:55:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2714713 00:05:12.804 18:55:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2714713 /var/tmp/spdk.sock 00:05:12.804 18:55:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:12.804 18:55:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2714713 ']' 00:05:12.804 18:55:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.804 18:55:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.804 18:55:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.804 18:55:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.804 18:55:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.804 [2024-11-26 18:55:29.904788] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:05:12.804 [2024-11-26 18:55:29.904839] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2714713 ] 00:05:12.804 [2024-11-26 18:55:29.989641] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
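In contrast to the test that just ended, locking_overlapped_coremask_via_rpc starts both of its targets with --disable-cpumask-locks, which is what the "CPU core locks deactivated" notice above reports: no per-core lock files are taken at startup, so the overlapping masks can coexist until locking is switched on explicitly over RPC. A minimal sketch of the launch sequence, with the binary path as used throughout this workspace:

    # Both targets come up with core locking off; the overlap on core 2 is
    # harmless until framework_enable_cpumask_locks is called later (sketch).
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt \
        -m 0x7 --disable-cpumask-locks &
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt \
        -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &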
00:05:12.804 [2024-11-26 18:55:29.989674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:13.065 [2024-11-26 18:55:30.030866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.065 [2024-11-26 18:55:30.031014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.065 [2024-11-26 18:55:30.031015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:13.635 18:55:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.635 18:55:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:13.635 18:55:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2715046 00:05:13.635 18:55:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2715046 /var/tmp/spdk2.sock 00:05:13.635 18:55:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2715046 ']' 00:05:13.635 18:55:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:13.635 18:55:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:13.635 18:55:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.635 18:55:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:13.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:13.635 18:55:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.635 18:55:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.635 [2024-11-26 18:55:30.761952] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:05:13.635 [2024-11-26 18:55:30.762007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2715046 ] 00:05:13.895 [2024-11-26 18:55:30.849231] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:13.895 [2024-11-26 18:55:30.849255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:13.895 [2024-11-26 18:55:30.912199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:13.895 [2024-11-26 18:55:30.912283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:13.895 [2024-11-26 18:55:30.912285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:14.467 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.467 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:14.467 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:14.467 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.467 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.467 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.467 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:14.467 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:14.467 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:14.467 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:14.467 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:14.467 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:14.467 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:14.467 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:14.467 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.467 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.467 [2024-11-26 18:55:31.565221] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2714713 has claimed it. 
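The ordering here is the point of the test: the plain rpc_cmd framework_enable_cpumask_locks above, issued to the first target, succeeds and claims cores 0 through 2, so the second target's attempt over /var/tmp/spdk2.sock trips on core 2 and produces the claim_cpu_cores error just shown. While the first target holds its claim, the backing lock files can be inspected directly (naming as used by check_remaining_locks elsewhere in this log):

    # One lock file per claimed core; for mask 0x7 expect _000 through _002.
    ls /var/tmp/spdk_cpu_lock_*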
00:05:14.467 request: 00:05:14.467 { 00:05:14.467 "method": "framework_enable_cpumask_locks", 00:05:14.467 "req_id": 1 00:05:14.467 } 00:05:14.467 Got JSON-RPC error response 00:05:14.467 response: 00:05:14.467 { 00:05:14.467 "code": -32603, 00:05:14.467 "message": "Failed to claim CPU core: 2" 00:05:14.467 } 00:05:14.467 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:14.467 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:14.467 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:14.467 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:14.467 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:14.467 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2714713 /var/tmp/spdk.sock 00:05:14.467 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2714713 ']' 00:05:14.467 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.467 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.467 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.467 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.467 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.728 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.728 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:14.728 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2715046 /var/tmp/spdk2.sock 00:05:14.728 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2715046 ']' 00:05:14.728 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:14.728 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.728 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:14.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
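The -32603 response above is the JSON-RPC surface of that same collision, with the failing core number carried in the message text. For reference, the traced rpc_cmd wrapper boils down to a plain rpc.py call against the second target's socket:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk2.sock framework_enable_cpumask_locks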
00:05:14.728 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.728 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.728 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.728 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:14.728 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:14.728 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:14.728 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:14.728 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:14.728 00:05:14.728 real 0m2.085s 00:05:14.728 user 0m0.878s 00:05:14.728 sys 0m0.140s 00:05:14.728 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.728 18:55:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.728 ************************************ 00:05:14.728 END TEST locking_overlapped_coremask_via_rpc 00:05:14.728 ************************************ 00:05:14.989 18:55:31 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:14.989 18:55:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2714713 ]] 00:05:14.989 18:55:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2714713 00:05:14.989 18:55:31 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2714713 ']' 00:05:14.989 18:55:31 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2714713 00:05:14.989 18:55:31 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:14.989 18:55:31 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.989 18:55:31 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2714713 00:05:14.989 18:55:32 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:14.989 18:55:32 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:14.989 18:55:32 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2714713' 00:05:14.989 killing process with pid 2714713 00:05:14.989 18:55:32 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2714713 00:05:14.989 18:55:32 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2714713 00:05:15.249 18:55:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2715046 ]] 00:05:15.249 18:55:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2715046 00:05:15.249 18:55:32 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2715046 ']' 00:05:15.249 18:55:32 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2715046 00:05:15.249 18:55:32 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:15.249 18:55:32 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:05:15.249 18:55:32 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2715046 00:05:15.249 18:55:32 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:15.249 18:55:32 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:15.249 18:55:32 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2715046' 00:05:15.249 killing process with pid 2715046 00:05:15.249 18:55:32 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2715046 00:05:15.249 18:55:32 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2715046 00:05:15.510 18:55:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:15.510 18:55:32 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:15.510 18:55:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2714713 ]] 00:05:15.510 18:55:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2714713 00:05:15.510 18:55:32 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2714713 ']' 00:05:15.510 18:55:32 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2714713 00:05:15.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2714713) - No such process 00:05:15.510 18:55:32 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2714713 is not found' 00:05:15.510 Process with pid 2714713 is not found 00:05:15.510 18:55:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2715046 ]] 00:05:15.510 18:55:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2715046 00:05:15.510 18:55:32 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2715046 ']' 00:05:15.510 18:55:32 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2715046 00:05:15.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2715046) - No such process 00:05:15.511 18:55:32 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2715046 is not found' 00:05:15.511 Process with pid 2715046 is not found 00:05:15.511 18:55:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:15.511 00:05:15.511 real 0m15.790s 00:05:15.511 user 0m27.949s 00:05:15.511 sys 0m4.780s 00:05:15.511 18:55:32 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.511 18:55:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.511 ************************************ 00:05:15.511 END TEST cpu_locks 00:05:15.511 ************************************ 00:05:15.511 00:05:15.511 real 0m41.702s 00:05:15.511 user 1m22.261s 00:05:15.511 sys 0m8.152s 00:05:15.511 18:55:32 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.511 18:55:32 event -- common/autotest_common.sh@10 -- # set +x 00:05:15.511 ************************************ 00:05:15.511 END TEST event 00:05:15.511 ************************************ 00:05:15.511 18:55:32 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:15.511 18:55:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.511 18:55:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.511 18:55:32 -- common/autotest_common.sh@10 -- # set +x 00:05:15.511 ************************************ 00:05:15.511 START TEST thread 00:05:15.511 ************************************ 00:05:15.511 18:55:32 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:15.511 * Looking for test storage... 00:05:15.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:15.511 18:55:32 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:15.511 18:55:32 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:15.772 18:55:32 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:15.772 18:55:32 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:15.772 18:55:32 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.772 18:55:32 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.772 18:55:32 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.772 18:55:32 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.772 18:55:32 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.772 18:55:32 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.772 18:55:32 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.772 18:55:32 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.772 18:55:32 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.772 18:55:32 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.772 18:55:32 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.772 18:55:32 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:15.772 18:55:32 thread -- scripts/common.sh@345 -- # : 1 00:05:15.772 18:55:32 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.772 18:55:32 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:15.772 18:55:32 thread -- scripts/common.sh@365 -- # decimal 1 00:05:15.772 18:55:32 thread -- scripts/common.sh@353 -- # local d=1 00:05:15.772 18:55:32 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.772 18:55:32 thread -- scripts/common.sh@355 -- # echo 1 00:05:15.772 18:55:32 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.772 18:55:32 thread -- scripts/common.sh@366 -- # decimal 2 00:05:15.772 18:55:32 thread -- scripts/common.sh@353 -- # local d=2 00:05:15.772 18:55:32 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.772 18:55:32 thread -- scripts/common.sh@355 -- # echo 2 00:05:15.772 18:55:32 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.772 18:55:32 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.772 18:55:32 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.772 18:55:32 thread -- scripts/common.sh@368 -- # return 0 00:05:15.772 18:55:32 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.772 18:55:32 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:15.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.772 --rc genhtml_branch_coverage=1 00:05:15.772 --rc genhtml_function_coverage=1 00:05:15.772 --rc genhtml_legend=1 00:05:15.772 --rc geninfo_all_blocks=1 00:05:15.772 --rc geninfo_unexecuted_blocks=1 00:05:15.772 00:05:15.772 ' 00:05:15.772 18:55:32 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:15.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.772 --rc genhtml_branch_coverage=1 00:05:15.772 --rc genhtml_function_coverage=1 00:05:15.772 --rc genhtml_legend=1 00:05:15.772 --rc geninfo_all_blocks=1 00:05:15.772 --rc geninfo_unexecuted_blocks=1 00:05:15.772 
00:05:15.772 ' 00:05:15.772 18:55:32 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:15.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.772 --rc genhtml_branch_coverage=1 00:05:15.772 --rc genhtml_function_coverage=1 00:05:15.772 --rc genhtml_legend=1 00:05:15.772 --rc geninfo_all_blocks=1 00:05:15.772 --rc geninfo_unexecuted_blocks=1 00:05:15.772 00:05:15.772 ' 00:05:15.772 18:55:32 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:15.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.772 --rc genhtml_branch_coverage=1 00:05:15.772 --rc genhtml_function_coverage=1 00:05:15.772 --rc genhtml_legend=1 00:05:15.772 --rc geninfo_all_blocks=1 00:05:15.772 --rc geninfo_unexecuted_blocks=1 00:05:15.772 00:05:15.772 ' 00:05:15.772 18:55:32 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:15.772 18:55:32 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:15.772 18:55:32 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.772 18:55:32 thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.772 ************************************ 00:05:15.772 START TEST thread_poller_perf 00:05:15.772 ************************************ 00:05:15.772 18:55:32 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:15.772 [2024-11-26 18:55:32.877059] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:05:15.772 [2024-11-26 18:55:32.877175] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2715497 ] 00:05:15.772 [2024-11-26 18:55:32.972997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.033 [2024-11-26 18:55:33.004652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.033 Running 1000 pollers for 1 seconds with 1 microseconds period. 
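Reading the banner above against the traced command line: -b 1000 registers 1000 pollers, -l 1 gives each a 1 microsecond period, and -t 1 runs the measurement for 1 second, which is exactly what the program reports. The second run further down repeats this with -l 0, a zero period, which in SPDK selects pollers that fire on every reactor iteration rather than on a timer. The invocation, restated:

    # -b: pollers to register, -l: poller period in microseconds (0 = run on
    # every iteration), -t: seconds to run.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf \
        -b 1000 -l 1 -t 1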
00:05:16.974 [2024-11-26T17:55:34.187Z] ====================================== 00:05:16.974 [2024-11-26T17:55:34.187Z] busy:2409593830 (cyc) 00:05:16.974 [2024-11-26T17:55:34.187Z] total_run_count: 418000 00:05:16.974 [2024-11-26T17:55:34.187Z] tsc_hz: 2400000000 (cyc) 00:05:16.974 [2024-11-26T17:55:34.187Z] ====================================== 00:05:16.974 [2024-11-26T17:55:34.187Z] poller_cost: 5764 (cyc), 2401 (nsec) 00:05:16.974 00:05:16.974 real 0m1.184s 00:05:16.974 user 0m1.099s 00:05:16.974 sys 0m0.081s 00:05:16.974 18:55:34 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.974 18:55:34 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:16.974 ************************************ 00:05:16.974 END TEST thread_poller_perf 00:05:16.974 ************************************ 00:05:16.974 18:55:34 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:16.974 18:55:34 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:16.974 18:55:34 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.974 18:55:34 thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.974 ************************************ 00:05:16.974 START TEST thread_poller_perf 00:05:16.974 ************************************ 00:05:16.974 18:55:34 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:16.974 [2024-11-26 18:55:34.137448] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:05:16.974 [2024-11-26 18:55:34.137544] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2715846 ] 00:05:17.235 [2024-11-26 18:55:34.224675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.235 [2024-11-26 18:55:34.254365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.235 Running 1000 pollers for 1 seconds with 0 microseconds period. 
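The poller_cost line above is plain arithmetic over the other counters: busy cycles divided by total_run_count gives cycles per poller invocation, and dividing by tsc_hz converts that to nanoseconds. Reproducing the timed run's figures (the 432 cyc / 180 nsec pair reported below for the zero-period run checks out the same way):

    busy=2409593830; runs=418000; tsc_hz=2400000000
    cyc=$(( busy / runs ))                    # 5764 cycles per invocation
    nsec=$(( cyc * 1000000000 / tsc_hz ))     # 2401 ns at a 2.4 GHz TSC
    echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"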
00:05:18.175 [2024-11-26T17:55:35.388Z] ====================================== 00:05:18.175 [2024-11-26T17:55:35.388Z] busy:2401321410 (cyc) 00:05:18.175 [2024-11-26T17:55:35.388Z] total_run_count: 5556000 00:05:18.175 [2024-11-26T17:55:35.388Z] tsc_hz: 2400000000 (cyc) 00:05:18.175 [2024-11-26T17:55:35.388Z] ====================================== 00:05:18.175 [2024-11-26T17:55:35.388Z] poller_cost: 432 (cyc), 180 (nsec) 00:05:18.175 00:05:18.175 real 0m1.165s 00:05:18.175 user 0m1.084s 00:05:18.175 sys 0m0.078s 00:05:18.175 18:55:35 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.175 18:55:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:18.175 ************************************ 00:05:18.175 END TEST thread_poller_perf 00:05:18.175 ************************************ 00:05:18.175 18:55:35 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:18.175 00:05:18.175 real 0m2.702s 00:05:18.175 user 0m2.349s 00:05:18.175 sys 0m0.367s 00:05:18.175 18:55:35 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.175 18:55:35 thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.175 ************************************ 00:05:18.175 END TEST thread 00:05:18.175 ************************************ 00:05:18.175 18:55:35 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:18.175 18:55:35 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:18.175 18:55:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.175 18:55:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.175 18:55:35 -- common/autotest_common.sh@10 -- # set +x 00:05:18.436 ************************************ 00:05:18.436 START TEST app_cmdline 00:05:18.436 ************************************ 00:05:18.436 18:55:35 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:18.436 * Looking for test storage... 
00:05:18.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:18.436 18:55:35 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:18.436 18:55:35 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:18.436 18:55:35 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:18.436 18:55:35 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:18.436 18:55:35 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.436 18:55:35 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.436 18:55:35 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.436 18:55:35 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.436 18:55:35 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.436 18:55:35 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.436 18:55:35 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.436 18:55:35 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.436 18:55:35 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.436 18:55:35 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.436 18:55:35 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.436 18:55:35 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:18.436 18:55:35 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:18.436 18:55:35 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.436 18:55:35 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.437 18:55:35 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:18.437 18:55:35 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:18.437 18:55:35 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.437 18:55:35 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:18.437 18:55:35 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.437 18:55:35 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:18.437 18:55:35 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:18.437 18:55:35 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.437 18:55:35 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:18.437 18:55:35 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.437 18:55:35 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.437 18:55:35 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.437 18:55:35 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:18.437 18:55:35 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.437 18:55:35 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:18.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.437 --rc genhtml_branch_coverage=1 00:05:18.437 --rc genhtml_function_coverage=1 00:05:18.437 --rc genhtml_legend=1 00:05:18.437 --rc geninfo_all_blocks=1 00:05:18.437 --rc geninfo_unexecuted_blocks=1 00:05:18.437 00:05:18.437 ' 00:05:18.437 18:55:35 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:18.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.437 --rc genhtml_branch_coverage=1 00:05:18.437 --rc genhtml_function_coverage=1 00:05:18.437 --rc genhtml_legend=1 00:05:18.437 --rc geninfo_all_blocks=1 00:05:18.437 --rc geninfo_unexecuted_blocks=1 
00:05:18.437 00:05:18.437 ' 00:05:18.437 18:55:35 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:18.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.437 --rc genhtml_branch_coverage=1 00:05:18.437 --rc genhtml_function_coverage=1 00:05:18.437 --rc genhtml_legend=1 00:05:18.437 --rc geninfo_all_blocks=1 00:05:18.437 --rc geninfo_unexecuted_blocks=1 00:05:18.437 00:05:18.437 ' 00:05:18.437 18:55:35 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:18.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.437 --rc genhtml_branch_coverage=1 00:05:18.437 --rc genhtml_function_coverage=1 00:05:18.437 --rc genhtml_legend=1 00:05:18.437 --rc geninfo_all_blocks=1 00:05:18.437 --rc geninfo_unexecuted_blocks=1 00:05:18.437 00:05:18.437 ' 00:05:18.437 18:55:35 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:18.437 18:55:35 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2716222 00:05:18.437 18:55:35 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2716222 00:05:18.437 18:55:35 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2716222 ']' 00:05:18.437 18:55:35 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:18.437 18:55:35 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.437 18:55:35 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.437 18:55:35 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.437 18:55:35 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.437 18:55:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:18.697 [2024-11-26 18:55:35.662690] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
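Note the --rpcs-allowed spdk_get_version,rpc_get_methods flag on this spdk_tgt: the cmdline test starts the target with an RPC allow-list, so exactly those two methods are callable and everything else is rejected, which the env_dpdk_get_mem_stats probe further down demonstrates. A minimal reproduction against a target already listening on the default socket:

    # The allowed method succeeds; anything outside the list comes back as
    # JSON-RPC error -32601, "Method not found".
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats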
00:05:18.697 [2024-11-26 18:55:35.662762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2716222 ] 00:05:18.697 [2024-11-26 18:55:35.748261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.697 [2024-11-26 18:55:35.780099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.267 18:55:36 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.267 18:55:36 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:19.267 18:55:36 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:19.529 { 00:05:19.529 "version": "SPDK v25.01-pre git sha1 afdec00e1", 00:05:19.529 "fields": { 00:05:19.529 "major": 25, 00:05:19.529 "minor": 1, 00:05:19.529 "patch": 0, 00:05:19.529 "suffix": "-pre", 00:05:19.529 "commit": "afdec00e1" 00:05:19.529 } 00:05:19.529 } 00:05:19.529 18:55:36 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:19.529 18:55:36 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:19.529 18:55:36 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:19.529 18:55:36 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:19.529 18:55:36 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:19.529 18:55:36 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:19.529 18:55:36 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:19.529 18:55:36 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.529 18:55:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:19.529 18:55:36 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.529 18:55:36 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:19.529 18:55:36 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:19.529 18:55:36 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:19.529 18:55:36 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:19.529 18:55:36 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:19.529 18:55:36 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:19.529 18:55:36 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:19.529 18:55:36 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:19.529 18:55:36 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:19.529 18:55:36 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:19.529 18:55:36 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:19.529 18:55:36 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:19.529 18:55:36 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:19.529 18:55:36 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:19.790 request: 00:05:19.790 { 00:05:19.790 "method": "env_dpdk_get_mem_stats", 00:05:19.790 "req_id": 1 00:05:19.790 } 00:05:19.790 Got JSON-RPC error response 00:05:19.790 response: 00:05:19.790 { 00:05:19.790 "code": -32601, 00:05:19.790 "message": "Method not found" 00:05:19.790 } 00:05:19.790 18:55:36 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:19.790 18:55:36 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:19.790 18:55:36 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:19.790 18:55:36 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:19.790 18:55:36 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2716222 00:05:19.790 18:55:36 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2716222 ']' 00:05:19.790 18:55:36 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2716222 00:05:19.790 18:55:36 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:19.790 18:55:36 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:19.790 18:55:36 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2716222 00:05:19.790 18:55:36 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:19.790 18:55:36 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:19.790 18:55:36 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2716222' 00:05:19.790 killing process with pid 2716222 00:05:19.790 18:55:36 app_cmdline -- common/autotest_common.sh@973 -- # kill 2716222 00:05:19.790 18:55:36 app_cmdline -- common/autotest_common.sh@978 -- # wait 2716222 00:05:20.051 00:05:20.051 real 0m1.712s 00:05:20.051 user 0m2.063s 00:05:20.051 sys 0m0.448s 00:05:20.051 18:55:37 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.051 18:55:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:20.051 ************************************ 00:05:20.051 END TEST app_cmdline 00:05:20.051 ************************************ 00:05:20.051 18:55:37 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:20.051 18:55:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.051 18:55:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.051 18:55:37 -- common/autotest_common.sh@10 -- # set +x 00:05:20.051 ************************************ 00:05:20.051 START TEST version 00:05:20.051 ************************************ 00:05:20.051 18:55:37 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:20.312 * Looking for test storage... 
00:05:20.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:20.312 18:55:37 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:20.312 18:55:37 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:20.312 18:55:37 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:20.312 18:55:37 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:20.312 18:55:37 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.312 18:55:37 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.312 18:55:37 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.312 18:55:37 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.312 18:55:37 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.312 18:55:37 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.312 18:55:37 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.312 18:55:37 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.312 18:55:37 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.312 18:55:37 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.312 18:55:37 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.312 18:55:37 version -- scripts/common.sh@344 -- # case "$op" in 00:05:20.312 18:55:37 version -- scripts/common.sh@345 -- # : 1 00:05:20.312 18:55:37 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.312 18:55:37 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:20.312 18:55:37 version -- scripts/common.sh@365 -- # decimal 1 00:05:20.312 18:55:37 version -- scripts/common.sh@353 -- # local d=1 00:05:20.312 18:55:37 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.312 18:55:37 version -- scripts/common.sh@355 -- # echo 1 00:05:20.312 18:55:37 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.312 18:55:37 version -- scripts/common.sh@366 -- # decimal 2 00:05:20.312 18:55:37 version -- scripts/common.sh@353 -- # local d=2 00:05:20.312 18:55:37 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.312 18:55:37 version -- scripts/common.sh@355 -- # echo 2 00:05:20.312 18:55:37 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.312 18:55:37 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.312 18:55:37 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.312 18:55:37 version -- scripts/common.sh@368 -- # return 0 00:05:20.312 18:55:37 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.312 18:55:37 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:20.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.312 --rc genhtml_branch_coverage=1 00:05:20.312 --rc genhtml_function_coverage=1 00:05:20.312 --rc genhtml_legend=1 00:05:20.312 --rc geninfo_all_blocks=1 00:05:20.312 --rc geninfo_unexecuted_blocks=1 00:05:20.312 00:05:20.312 ' 00:05:20.312 18:55:37 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:20.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.312 --rc genhtml_branch_coverage=1 00:05:20.312 --rc genhtml_function_coverage=1 00:05:20.312 --rc genhtml_legend=1 00:05:20.312 --rc geninfo_all_blocks=1 00:05:20.312 --rc geninfo_unexecuted_blocks=1 00:05:20.312 00:05:20.312 ' 00:05:20.312 18:55:37 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:20.312 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.312 --rc genhtml_branch_coverage=1 00:05:20.312 --rc genhtml_function_coverage=1 00:05:20.312 --rc genhtml_legend=1 00:05:20.312 --rc geninfo_all_blocks=1 00:05:20.312 --rc geninfo_unexecuted_blocks=1 00:05:20.312 00:05:20.312 ' 00:05:20.312 18:55:37 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:20.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.312 --rc genhtml_branch_coverage=1 00:05:20.312 --rc genhtml_function_coverage=1 00:05:20.312 --rc genhtml_legend=1 00:05:20.312 --rc geninfo_all_blocks=1 00:05:20.312 --rc geninfo_unexecuted_blocks=1 00:05:20.312 00:05:20.312 ' 00:05:20.312 18:55:37 version -- app/version.sh@17 -- # get_header_version major 00:05:20.312 18:55:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:20.312 18:55:37 version -- app/version.sh@14 -- # cut -f2 00:05:20.312 18:55:37 version -- app/version.sh@14 -- # tr -d '"' 00:05:20.312 18:55:37 version -- app/version.sh@17 -- # major=25 00:05:20.312 18:55:37 version -- app/version.sh@18 -- # get_header_version minor 00:05:20.312 18:55:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:20.312 18:55:37 version -- app/version.sh@14 -- # cut -f2 00:05:20.312 18:55:37 version -- app/version.sh@14 -- # tr -d '"' 00:05:20.312 18:55:37 version -- app/version.sh@18 -- # minor=1 00:05:20.312 18:55:37 version -- app/version.sh@19 -- # get_header_version patch 00:05:20.312 18:55:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:20.312 18:55:37 version -- app/version.sh@14 -- # cut -f2 00:05:20.312 18:55:37 version -- app/version.sh@14 -- # tr -d '"' 00:05:20.312 18:55:37 version -- app/version.sh@19 -- # patch=0 00:05:20.312 18:55:37 version -- app/version.sh@20 -- # get_header_version suffix 00:05:20.312 18:55:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:20.312 18:55:37 version -- app/version.sh@14 -- # cut -f2 00:05:20.312 18:55:37 version -- app/version.sh@14 -- # tr -d '"' 00:05:20.312 18:55:37 version -- app/version.sh@20 -- # suffix=-pre 00:05:20.312 18:55:37 version -- app/version.sh@22 -- # version=25.1 00:05:20.312 18:55:37 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:20.312 18:55:37 version -- app/version.sh@28 -- # version=25.1rc0 00:05:20.312 18:55:37 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:20.312 18:55:37 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:20.312 18:55:37 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:20.312 18:55:37 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:20.312 00:05:20.312 real 0m0.273s 00:05:20.312 user 0m0.161s 00:05:20.312 sys 0m0.161s 00:05:20.312 18:55:37 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.312 
18:55:37 version -- common/autotest_common.sh@10 -- # set +x 00:05:20.312 ************************************ 00:05:20.312 END TEST version 00:05:20.312 ************************************ 00:05:20.312 18:55:37 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:20.312 18:55:37 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:20.313 18:55:37 -- spdk/autotest.sh@194 -- # uname -s 00:05:20.313 18:55:37 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:20.313 18:55:37 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:20.313 18:55:37 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:20.313 18:55:37 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:20.313 18:55:37 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:20.313 18:55:37 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:20.313 18:55:37 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:20.313 18:55:37 -- common/autotest_common.sh@10 -- # set +x 00:05:20.575 18:55:37 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:20.575 18:55:37 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:20.575 18:55:37 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:20.575 18:55:37 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:20.575 18:55:37 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:20.575 18:55:37 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:20.575 18:55:37 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:20.575 18:55:37 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:20.575 18:55:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.575 18:55:37 -- common/autotest_common.sh@10 -- # set +x 00:05:20.575 ************************************ 00:05:20.575 START TEST nvmf_tcp 00:05:20.575 ************************************ 00:05:20.575 18:55:37 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:20.575 * Looking for test storage... 
00:05:20.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:20.575 18:55:37 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:20.575 18:55:37 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:20.575 18:55:37 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:20.575 18:55:37 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:20.575 18:55:37 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.575 18:55:37 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.575 18:55:37 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.575 18:55:37 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.575 18:55:37 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.575 18:55:37 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.575 18:55:37 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.575 18:55:37 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.575 18:55:37 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.575 18:55:37 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.575 18:55:37 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.575 18:55:37 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:20.575 18:55:37 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:20.575 18:55:37 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.575 18:55:37 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:20.575 18:55:37 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:20.575 18:55:37 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:20.575 18:55:37 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.575 18:55:37 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:20.575 18:55:37 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.575 18:55:37 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:20.575 18:55:37 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:20.575 18:55:37 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.575 18:55:37 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:20.836 18:55:37 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.836 18:55:37 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.836 18:55:37 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.836 18:55:37 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:20.836 18:55:37 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.836 18:55:37 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:20.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.836 --rc genhtml_branch_coverage=1 00:05:20.836 --rc genhtml_function_coverage=1 00:05:20.836 --rc genhtml_legend=1 00:05:20.836 --rc geninfo_all_blocks=1 00:05:20.836 --rc geninfo_unexecuted_blocks=1 00:05:20.836 00:05:20.836 ' 00:05:20.836 18:55:37 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:20.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.836 --rc genhtml_branch_coverage=1 00:05:20.836 --rc genhtml_function_coverage=1 00:05:20.836 --rc genhtml_legend=1 00:05:20.836 --rc geninfo_all_blocks=1 00:05:20.836 --rc geninfo_unexecuted_blocks=1 00:05:20.836 00:05:20.836 ' 00:05:20.836 18:55:37 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
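The cmp_versions trace running through these lines is the suite's stock version comparison: both version strings are split on dots and dashes and compared field by field, so lcov 1.15 sorts below 2 on the very first field and the --rc lcov_branch_coverage flags seen in the exports are selected. A reduced sketch of the same logic (illustrative, not the full scripts/common.sh implementation):

    lt() {    # sketch of the traced "lt 1.15 2" path
        local IFS=.- v
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for (( v = 0; v < ${#a[@]} || v < ${#b[@]}; v++ )); do
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
        done
        return 1
    }
    lt 1.15 2 && echo "lcov 1.15 < 2"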
'LCOV=lcov 00:05:20.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.836 --rc genhtml_branch_coverage=1 00:05:20.836 --rc genhtml_function_coverage=1 00:05:20.836 --rc genhtml_legend=1 00:05:20.836 --rc geninfo_all_blocks=1 00:05:20.836 --rc geninfo_unexecuted_blocks=1 00:05:20.836 00:05:20.836 ' 00:05:20.836 18:55:37 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:20.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.836 --rc genhtml_branch_coverage=1 00:05:20.836 --rc genhtml_function_coverage=1 00:05:20.836 --rc genhtml_legend=1 00:05:20.836 --rc geninfo_all_blocks=1 00:05:20.836 --rc geninfo_unexecuted_blocks=1 00:05:20.836 00:05:20.836 ' 00:05:20.836 18:55:37 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:20.836 18:55:37 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:20.836 18:55:37 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:20.836 18:55:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:20.836 18:55:37 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.836 18:55:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:20.836 ************************************ 00:05:20.836 START TEST nvmf_target_core 00:05:20.836 ************************************ 00:05:20.836 18:55:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:20.836 * Looking for test storage... 00:05:20.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:20.836 18:55:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:20.836 18:55:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:05:20.836 18:55:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:20.836 18:55:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:20.836 18:55:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.836 18:55:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.836 18:55:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.836 18:55:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.836 18:55:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.836 18:55:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.836 18:55:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.836 18:55:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.836 18:55:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.836 18:55:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.836 18:55:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.836 18:55:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:20.836 18:55:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:20.836 18:55:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.836 18:55:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:20.836 18:55:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:20.836 18:55:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:20.836 18:55:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.836 18:55:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:20.836 18:55:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.836 18:55:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:20.836 18:55:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:20.836 18:55:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.836 18:55:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:20.836 18:55:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.836 18:55:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.836 18:55:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.836 18:55:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:20.836 18:55:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.836 18:55:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:20.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.836 --rc genhtml_branch_coverage=1 00:05:20.836 --rc genhtml_function_coverage=1 00:05:20.836 --rc genhtml_legend=1 00:05:20.836 --rc geninfo_all_blocks=1 00:05:20.836 --rc geninfo_unexecuted_blocks=1 00:05:20.837 00:05:20.837 ' 00:05:20.837 18:55:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:20.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.837 --rc genhtml_branch_coverage=1 00:05:20.837 --rc genhtml_function_coverage=1 00:05:20.837 --rc genhtml_legend=1 00:05:20.837 --rc geninfo_all_blocks=1 00:05:20.837 --rc geninfo_unexecuted_blocks=1 00:05:20.837 00:05:20.837 ' 00:05:20.837 18:55:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:20.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.837 --rc genhtml_branch_coverage=1 00:05:20.837 --rc genhtml_function_coverage=1 00:05:20.837 --rc genhtml_legend=1 00:05:20.837 --rc geninfo_all_blocks=1 00:05:20.837 --rc geninfo_unexecuted_blocks=1 00:05:20.837 00:05:20.837 ' 00:05:20.837 18:55:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:20.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.837 --rc genhtml_branch_coverage=1 00:05:20.837 --rc genhtml_function_coverage=1 00:05:20.837 --rc genhtml_legend=1 00:05:20.837 --rc geninfo_all_blocks=1 00:05:20.837 --rc geninfo_unexecuted_blocks=1 00:05:20.837 00:05:20.837 ' 00:05:20.837 18:55:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:20.837 18:55:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:20.837 18:55:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:20.837 18:55:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:20.837 18:55:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:20.837 18:55:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:21.097 18:55:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:21.097 18:55:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:21.097 18:55:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:21.097 18:55:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:21.097 18:55:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:21.097 18:55:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:21.097 18:55:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:21.097 18:55:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:21.097 18:55:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:21.097 18:55:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:21.097 18:55:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:21.097 18:55:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:21.097 18:55:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:21.097 18:55:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:21.097 18:55:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:21.097 18:55:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:21.097 18:55:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:21.097 18:55:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:21.097 18:55:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:21.097 18:55:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.097 18:55:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.097 18:55:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.097 18:55:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:21.097 18:55:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.097 18:55:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:21.097 18:55:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:21.097 18:55:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:21.097 18:55:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:21.097 18:55:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:21.098 18:55:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:21.098 18:55:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:21.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:21.098 18:55:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:21.098 18:55:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:21.098 18:55:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:21.098 18:55:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:21.098 18:55:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:21.098 18:55:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:21.098 18:55:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:21.098 18:55:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:21.098 18:55:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.098 18:55:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:21.098 
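The `[: : integer expression expected` message from nvmf/common.sh line 33 above recurs before each test in this run: build_nvmf_app_args feeds an empty expansion (`'[' '' -eq 1 ']'`) into a numeric test, and `[` needs an integer on both sides. The trace only shows the empty value, not which variable produced it, so the name below is a placeholder; a minimal reproduction and the usual guard:

    flag=""
    [ "$flag" -eq 1 ]        # -> [: : integer expression expected (exit status 2)
    [ "${flag:-0}" -eq 1 ]   # guarded: an unset/empty value defaults to 0

The broken test still behaves as "false" either way, which is why the run continues past the complaint.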
************************************ 00:05:21.098 START TEST nvmf_abort 00:05:21.098 ************************************ 00:05:21.098 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:21.098 * Looking for test storage... 00:05:21.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:21.098 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:21.098 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:05:21.098 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:21.098 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:21.098 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.098 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.098 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.098 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.098 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.098 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.098 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.098 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.098 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.098 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.098 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.098 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:21.098 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:21.098 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.098 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:21.098 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:21.098 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:21.098 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.098 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:21.359 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.359 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:21.359 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:21.359 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.359 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:21.359 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.359 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.359 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.359 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:21.359 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.359 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:21.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.359 --rc genhtml_branch_coverage=1 00:05:21.359 --rc genhtml_function_coverage=1 00:05:21.359 --rc genhtml_legend=1 00:05:21.359 --rc geninfo_all_blocks=1 00:05:21.359 --rc geninfo_unexecuted_blocks=1 00:05:21.360 00:05:21.360 ' 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:21.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.360 --rc genhtml_branch_coverage=1 00:05:21.360 --rc genhtml_function_coverage=1 00:05:21.360 --rc genhtml_legend=1 00:05:21.360 --rc geninfo_all_blocks=1 00:05:21.360 --rc geninfo_unexecuted_blocks=1 00:05:21.360 00:05:21.360 ' 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:21.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.360 --rc genhtml_branch_coverage=1 00:05:21.360 --rc genhtml_function_coverage=1 00:05:21.360 --rc genhtml_legend=1 00:05:21.360 --rc geninfo_all_blocks=1 00:05:21.360 --rc geninfo_unexecuted_blocks=1 00:05:21.360 00:05:21.360 ' 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:21.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.360 --rc genhtml_branch_coverage=1 00:05:21.360 --rc genhtml_function_coverage=1 00:05:21.360 --rc genhtml_legend=1 00:05:21.360 --rc geninfo_all_blocks=1 00:05:21.360 --rc geninfo_unexecuted_blocks=1 00:05:21.360 00:05:21.360 ' 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:21.360 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
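nvmftestinit, whose trace starts here, does the physical-NIC plumbing the abort test depends on: it matches the two e810 ports (0x8086:0x159b), moves the target-side port into a fresh network namespace, addresses both ends on 10.0.0.0/24, opens TCP port 4420, and ping-checks both directions. Condensed to the bare commands, with interface and namespace names taken from the trace below (the real helper also flushes addresses first and tags the iptables rule with a comment):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator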
00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:21.360 18:55:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:29.497 18:55:45 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:29.497 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:29.497 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:29.497 18:55:45 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:29.497 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:29.497 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:29.497 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:29.498 18:55:45 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:29.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:29.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:05:29.498 00:05:29.498 --- 10.0.0.2 ping statistics --- 00:05:29.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:29.498 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:29.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:29.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:05:29.498 00:05:29.498 --- 10.0.0.1 ping statistics --- 00:05:29.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:29.498 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2720594 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2720594 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2720594 ']' 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.498 18:55:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:29.498 [2024-11-26 18:55:45.886051] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
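The nvmf_tgt launch line a few records up is assembled in stages: common.sh collects the base arguments when it is sourced, then prefixes the namespace wrapper once cvl_0_0_ns_spdk exists. Reduced to the moving parts, with values from this run (`-m 0xE` pins reactors to cores 1-3, matching the three reactor notices below; the real helper builds the array with more indirection):

    NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
    NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
    NVMF_APP=(./build/bin/nvmf_tgt -i 0 -e 0xFFFF)         # shm id 0, all trace groups
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") # run inside the namespace
    "${NVMF_APP[@]}" -m 0xE &
    nvmfpid=$!                                             # 2720594 in this run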
00:05:29.498 [2024-11-26 18:55:45.886117] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:29.498 [2024-11-26 18:55:45.986014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:29.498 [2024-11-26 18:55:46.039828] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:29.498 [2024-11-26 18:55:46.039882] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:29.498 [2024-11-26 18:55:46.039891] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:29.498 [2024-11-26 18:55:46.039898] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:29.498 [2024-11-26 18:55:46.039904] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:29.498 [2024-11-26 18:55:46.041722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:29.498 [2024-11-26 18:55:46.041883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.498 [2024-11-26 18:55:46.041885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:29.498 18:55:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.498 18:55:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:29.498 18:55:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:29.498 18:55:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:29.498 18:55:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:29.760 18:55:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:29.760 18:55:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:29.760 18:55:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.760 18:55:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:29.760 [2024-11-26 18:55:46.748140] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:29.760 18:55:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.760 18:55:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:29.760 18:55:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.760 18:55:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:29.760 Malloc0 00:05:29.760 18:55:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.760 18:55:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:29.760 18:55:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.760 18:55:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:29.760 Delay0 
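With the TCP transport and the Malloc0/Delay0 stack created above, the RPCs that follow publish Delay0 as a namespace of cnode0 and open the listeners. The whole target build is seven calls; the test issues them through the rpc_cmd helper, so driving rpc.py directly as below is an assumption, though the method names and arguments are exactly those traced (the delay latencies are in microseconds, which is what keeps I/O in flight long enough to abort):

    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
    $RPC bdev_malloc_create 64 4096 -b Malloc0             # 64 MiB, 4 KiB blocks
    $RPC bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
         -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420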
00:05:29.760 18:55:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.760 18:55:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:29.760 18:55:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.760 18:55:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:29.760 18:55:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.760 18:55:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:29.760 18:55:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.760 18:55:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:29.760 18:55:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.760 18:55:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:29.760 18:55:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.760 18:55:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:29.760 [2024-11-26 18:55:46.833466] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:29.760 18:55:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.760 18:55:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:29.760 18:55:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.760 18:55:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:29.760 18:55:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.760 18:55:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:30.021 [2024-11-26 18:55:46.984799] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:31.934 Initializing NVMe Controllers 00:05:31.934 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:31.934 controller IO queue size 128 less than required 00:05:31.934 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:31.934 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:31.934 Initialization complete. Launching workers. 
00:05:31.934 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28483 00:05:31.934 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28544, failed to submit 62 00:05:31.934 success 28487, unsuccessful 57, failed 0 00:05:31.934 18:55:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:31.934 18:55:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.934 18:55:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:31.934 18:55:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.934 18:55:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:31.934 18:55:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:31.934 18:55:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:31.934 18:55:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:31.934 18:55:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:31.934 18:55:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:31.934 18:55:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:31.934 18:55:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:31.934 rmmod nvme_tcp 00:05:31.934 rmmod nvme_fabrics 00:05:31.934 rmmod nvme_keyring 00:05:31.934 18:55:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:31.934 18:55:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:31.934 18:55:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:31.934 18:55:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2720594 ']' 00:05:31.934 18:55:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2720594 00:05:31.934 18:55:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2720594 ']' 00:05:31.934 18:55:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2720594 00:05:31.934 18:55:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:31.934 18:55:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.934 18:55:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2720594 00:05:32.195 18:55:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:32.195 18:55:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:32.195 18:55:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2720594' 00:05:32.195 killing process with pid 2720594 00:05:32.195 18:55:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2720594 00:05:32.195 18:55:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2720594 00:05:32.195 18:55:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:32.195 18:55:49 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:32.195 18:55:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:32.195 18:55:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:32.195 18:55:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:32.195 18:55:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:32.195 18:55:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:32.195 18:55:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:32.195 18:55:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:32.195 18:55:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:32.195 18:55:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:32.195 18:55:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:34.740 00:05:34.740 real 0m13.255s 00:05:34.740 user 0m13.744s 00:05:34.740 sys 0m6.503s 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.740 ************************************ 00:05:34.740 END TEST nvmf_abort 00:05:34.740 ************************************ 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:34.740 ************************************ 00:05:34.740 START TEST nvmf_ns_hotplug_stress 00:05:34.740 ************************************ 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:34.740 * Looking for test storage... 
00:05:34.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:34.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.740 --rc genhtml_branch_coverage=1 00:05:34.740 --rc genhtml_function_coverage=1 00:05:34.740 --rc genhtml_legend=1 00:05:34.740 --rc geninfo_all_blocks=1 00:05:34.740 --rc geninfo_unexecuted_blocks=1 00:05:34.740 00:05:34.740 ' 00:05:34.740 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:34.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.740 --rc genhtml_branch_coverage=1 00:05:34.740 --rc genhtml_function_coverage=1 00:05:34.740 --rc genhtml_legend=1 00:05:34.740 --rc geninfo_all_blocks=1 00:05:34.740 --rc geninfo_unexecuted_blocks=1 00:05:34.741 00:05:34.741 ' 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:34.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.741 --rc genhtml_branch_coverage=1 00:05:34.741 --rc genhtml_function_coverage=1 00:05:34.741 --rc genhtml_legend=1 00:05:34.741 --rc geninfo_all_blocks=1 00:05:34.741 --rc geninfo_unexecuted_blocks=1 00:05:34.741 00:05:34.741 ' 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:34.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.741 --rc genhtml_branch_coverage=1 00:05:34.741 --rc genhtml_function_coverage=1 00:05:34.741 --rc genhtml_legend=1 00:05:34.741 --rc geninfo_all_blocks=1 00:05:34.741 --rc geninfo_unexecuted_blocks=1 00:05:34.741 00:05:34.741 ' 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:34.741 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:34.741 18:55:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:42.884 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:42.884 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:42.884 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:42.884 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:42.884 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:42.884 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:42.884 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:42.884 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:42.884 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:42.884 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:42.884 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:05:42.884 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:42.884 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:42.884 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:42.884 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:42.884 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:42.884 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:42.884 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:42.884 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:42.884 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:42.884 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:42.884 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:42.884 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:42.884 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:42.884 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:42.884 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:42.884 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:42.884 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:42.884 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:42.884 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:42.884 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:42.884 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:42.884 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:42.884 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:42.884 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:42.884 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:42.884 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:42.884 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:42.884 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:42.885 
18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:42.885 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:42.885 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:42.885 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:42.885 18:55:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:42.885 18:55:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:42.885 18:55:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:42.885 18:55:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:42.885 18:55:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:42.885 18:55:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:42.885 18:55:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:42.885 18:55:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:42.885 18:55:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:42.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:42.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:05:42.885 00:05:42.885 --- 10.0.0.2 ping statistics --- 00:05:42.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:42.885 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:05:42.885 18:55:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:42.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:42.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:05:42.885 00:05:42.885 --- 10.0.0.1 ping statistics --- 00:05:42.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:42.885 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:05:42.885 18:55:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:42.885 18:55:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:42.885 18:55:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:42.885 18:55:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:42.885 18:55:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:42.885 18:55:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:42.885 18:55:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:42.885 18:55:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:42.885 18:55:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:42.885 18:55:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:42.885 18:55:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:42.885 18:55:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:42.885 18:55:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:42.885 18:55:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2725454 00:05:42.885 18:55:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2725454 00:05:42.885 18:55:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:42.885 18:55:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
2725454 ']' 00:05:42.885 18:55:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.885 18:55:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.885 18:55:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.885 18:55:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.885 18:55:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:42.885 [2024-11-26 18:55:59.279877] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:05:42.885 [2024-11-26 18:55:59.279950] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:42.885 [2024-11-26 18:55:59.376206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:42.885 [2024-11-26 18:55:59.429730] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:42.885 [2024-11-26 18:55:59.429782] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:42.885 [2024-11-26 18:55:59.429790] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:42.885 [2024-11-26 18:55:59.429797] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:42.885 [2024-11-26 18:55:59.429804] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
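At this point the harness has the full test topology in place: the target-side port cvl_0_0 sits in the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator-side port cvl_0_1 keeps 10.0.0.1/24 in the root namespace, TCP port 4420 is opened in iptables, both directions answer ping, and nvmf_tgt has been launched inside the namespace with core mask 0xE. The launch-and-configure pattern the trace follows condenses to roughly the sketch below; the until-loop is an illustrative stand-in for the in-tree waitforlisten helper (which also bounds retries), while the rpc.py calls are the ones traced from nvmf/common.sh and ns_hotplug_stress.sh.

    # Condensed sketch of the target bring-up traced above (not the in-tree code).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$SPDK/scripts/rpc.py"

    # Start the target inside the namespace that owns the 10.0.0.2 port.
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    # Wait until the target answers on its default RPC socket, /var/tmp/spdk.sock.
    until "$rpc" rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # give up if the target died
        sleep 0.5
    done

    # Configure it over RPC, exactly as the trace does next.
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Because the RPC endpoint is a UNIX-domain socket, rpc.py reaches the target from the root namespace even though its TCP listener lives inside cvl_0_0_ns_spdk; that is why none of the rpc.py invocations in this trace are wrapped in ip netns exec.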
00:05:42.885 [2024-11-26 18:55:59.431632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:42.885 [2024-11-26 18:55:59.431795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.885 [2024-11-26 18:55:59.431795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:43.147 18:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.147 18:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:43.147 18:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:43.147 18:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:43.147 18:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:43.147 18:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:43.147 18:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:43.147 18:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:43.147 [2024-11-26 18:56:00.312992] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:43.147 18:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:43.407 18:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:43.668 [2024-11-26 18:56:00.700129] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:43.668 18:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:43.928 18:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:43.928 Malloc0 00:05:44.189 18:56:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:44.189 Delay0 00:05:44.189 18:56:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.451 18:56:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:44.713 NULL1 00:05:44.713 18:56:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:44.973 18:56:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2726151 00:05:44.973 18:56:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151 00:05:44.973 18:56:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:44.973 18:56:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.973 18:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.234 18:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:45.234 18:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:45.496 true 00:05:45.496 18:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151 00:05:45.496 18:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.757 18:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.757 18:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:45.757 18:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:46.018 true 00:05:46.018 18:56:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151 00:05:46.018 18:56:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.279 18:56:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.279 18:56:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:46.279 18:56:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:46.540 true 00:05:46.540 18:56:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151 00:05:46.540 18:56:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.800 18:56:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.800 18:56:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:46.800 18:56:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:47.060 true 00:05:47.060 18:56:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151 00:05:47.060 18:56:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.321 18:56:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.322 18:56:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:47.322 18:56:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:47.583 true 00:05:47.583 18:56:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151 00:05:47.583 18:56:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.845 18:56:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.107 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:48.107 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:48.107 true 00:05:48.107 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151 00:05:48.107 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.367 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.628 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:48.628 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:48.628 true 00:05:48.628 18:56:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151 00:05:48.628 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.889 18:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.149 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:49.149 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:49.149 true 00:05:49.149 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151 00:05:49.149 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.410 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.670 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:49.670 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:49.670 true 00:05:49.930 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151 00:05:49.930 18:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.930 18:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.190 18:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:50.190 18:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:50.451 true 00:05:50.451 18:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151 00:05:50.451 18:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.451 18:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.712 18:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:50.712 18:56:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:50.972 true 00:05:50.972 18:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151 00:05:50.972 18:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.972 18:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.233 18:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:51.233 18:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:51.493 true 00:05:51.493 18:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151 00:05:51.493 18:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.493 18:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.754 18:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:51.754 18:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:52.014 true 00:05:52.014 18:56:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151 00:05:52.014 18:56:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.274 18:56:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.274 18:56:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:52.274 18:56:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:52.535 true 00:05:52.535 18:56:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151 00:05:52.535 18:56:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.796 18:56:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.796 18:56:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:52.796 18:56:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:53.056 true 00:05:53.056 18:56:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151 00:05:53.056 18:56:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.316 18:56:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.316 18:56:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:53.316 18:56:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:53.576 true 00:05:53.576 18:56:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151 00:05:53.576 18:56:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.836 18:56:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.836 18:56:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:53.836 18:56:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:54.097 true 00:05:54.097 18:56:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151 00:05:54.097 18:56:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.358 18:56:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.358 18:56:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:54.358 18:56:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:54.619 true 00:05:54.619 18:56:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151 00:05:54.619 18:56:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.879 18:56:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.139 18:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:55.139 18:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:55.139 true 00:05:55.139 18:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151 00:05:55.139 18:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.398 18:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.658 18:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:55.658 18:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:55.658 true 00:05:55.917 18:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151 00:05:55.917 18:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.917 18:56:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.175 18:56:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:56.175 18:56:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:56.175 true 00:05:56.435 18:56:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151 00:05:56.435 18:56:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.435 18:56:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.694 18:56:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:56.694 18:56:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:56.694 true 00:05:56.954 18:56:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151 00:05:56.954 18:56:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:56.954 18:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:57.261 18:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:05:57.261 18:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:05:57.261 true
00:05:57.562 18:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151
00:05:57.562 18:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:57.562 18:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:57.825 18:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:05:57.825 18:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:05:57.825 true
00:05:57.825 18:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151
00:05:57.825 18:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:58.085 18:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:58.344 18:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:05:58.344 18:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:05:58.344 true
00:05:58.605 18:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151
00:05:58.605 18:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:58.605 18:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:58.865 18:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:05:58.865 18:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:05:59.124 true
00:05:59.124 18:56:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151
00:05:59.124 18:56:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:59.384 18:56:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:59.384 18:56:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:05:59.384 18:56:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:05:59.646 true
00:05:59.646 18:56:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151
00:05:59.646 18:56:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:59.906 18:56:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:59.906 18:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:05:59.906 18:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:06:00.168 true
00:06:00.168 18:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151
00:06:00.168 18:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:00.428 18:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:00.428 18:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:06:00.428 18:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:06:00.687 true
00:06:00.687 18:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151
00:06:00.687 18:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:00.947 18:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:01.208 18:56:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:06:01.208 18:56:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:06:01.208 true
00:06:01.208 18:56:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151
00:06:01.208 18:56:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:01.469 18:56:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:01.729 18:56:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031
00:06:01.729 18:56:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031
00:06:01.729 true
00:06:01.729 18:56:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151
00:06:01.729 18:56:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:01.989 18:56:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:02.249 18:56:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032
00:06:02.249 18:56:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032
00:06:02.249 true
00:06:02.249 18:56:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151
00:06:02.249 18:56:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:02.511 18:56:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:02.772 18:56:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033
00:06:02.772 18:56:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033
00:06:02.772 true
00:06:02.772 18:56:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151
00:06:02.772 18:56:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:03.032 18:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:03.292 18:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034
00:06:03.292 18:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034
00:06:03.292 true
00:06:03.552 18:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151
00:06:03.552 18:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:03.552 18:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:03.812 18:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035
00:06:03.812 18:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035
00:06:04.072 true
00:06:04.072 18:56:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151
00:06:04.072 18:56:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:04.072 18:56:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:04.333 18:56:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036
00:06:04.333 18:56:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036
00:06:04.594 true
00:06:04.594 18:56:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151
00:06:04.594 18:56:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:04.594 18:56:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:04.854 18:56:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037
00:06:04.854 18:56:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037
00:06:05.114 true
00:06:05.114 18:56:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151
00:06:05.114 18:56:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:05.374 18:56:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:05.374 18:56:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038
00:06:05.374 18:56:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038
00:06:05.634 true
00:06:05.634 18:56:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151
00:06:05.635 18:56:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:05.895 18:56:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:05.895 18:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039
00:06:05.895 18:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039
00:06:06.159 true
00:06:06.159 18:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151
00:06:06.159 18:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:06.419 18:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:06.419 18:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040
00:06:06.419 18:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040
00:06:06.679 true
00:06:06.679 18:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151
00:06:06.679 18:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:06.938 18:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:07.198 18:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041
00:06:07.198 18:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041
00:06:07.198 true
00:06:07.198 18:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151
00:06:07.198 18:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:07.457 18:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:07.717 18:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042
00:06:07.717 18:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042
00:06:07.717 true
00:06:07.717 18:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151
00:06:07.717 18:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:07.976 18:56:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:08.235 18:56:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043
00:06:08.235 18:56:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043
00:06:08.235 true
00:06:08.495 18:56:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151
00:06:08.495 18:56:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:08.495 18:56:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:08.755 18:56:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044
00:06:08.755 18:56:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044
00:06:09.014 true
00:06:09.014 18:56:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151
00:06:09.014 18:56:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:09.014 18:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:09.275 18:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045
00:06:09.275 18:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045
00:06:09.536 true
00:06:09.536 18:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151
00:06:09.536 18:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:09.536 18:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:09.796 18:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046
00:06:09.796 18:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046
00:06:10.056 true
00:06:10.056 18:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151
00:06:10.056 18:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:10.316 18:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:10.316 18:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047
00:06:10.316 18:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047
00:06:10.576 true
00:06:10.576 18:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151
00:06:10.576 18:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:10.837 18:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:10.837 18:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048
00:06:10.837 18:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048
00:06:11.098 true
00:06:11.098 18:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151
00:06:11.098 18:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:11.359 18:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:11.359 18:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049
00:06:11.359 18:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049
00:06:11.619 true
00:06:11.619 18:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151
00:06:11.619 18:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:11.878 18:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:12.140 18:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050
00:06:12.140 18:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050
00:06:12.140 true
00:06:12.140 18:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151
00:06:12.140 18:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:12.401 18:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:12.661 18:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051
00:06:12.661 18:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051
00:06:12.661 true
00:06:12.661 18:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151
00:06:12.661 18:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:12.921 18:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:13.182 18:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052
00:06:13.182 18:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052
00:06:13.182 true
00:06:13.182 18:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151
00:06:13.182 18:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:13.442 18:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:13.703 18:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:06:13.703 18:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:06:13.703 true
00:06:13.962 18:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151
00:06:13.962 18:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:13.962 18:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:14.222 18:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054
00:06:14.222 18:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:06:14.484 true
00:06:14.484 18:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151
00:06:14.484 18:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:14.484 18:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:14.745 18:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:06:14.745 18:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:06:15.017 true
00:06:15.017 18:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151
00:06:15.017 18:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:15.017 18:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:15.277 Initializing NVMe Controllers
00:06:15.277 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:15.277 Controller IO queue size 128, less than required.
00:06:15.277 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:15.277 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:15.277 Initialization complete. Launching workers.
00:06:15.277 ========================================================
00:06:15.277                                                                    Latency(us)
00:06:15.277 Device Information                                             :       IOPS      MiB/s    Average        min        max
00:06:15.277 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   30757.06      15.02    4161.71    1116.97   11319.83
00:06:15.277 ========================================================
00:06:15.277 Total                                                          :   30757.06      15.02    4161.71    1116.97   11319.83
00:06:15.277
00:06:15.277 18:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056
00:06:15.277 18:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056
00:06:15.537 true
00:06:15.537 18:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2726151
00:06:15.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2726151) - No such process
00:06:15.537 18:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2726151
00:06:15.537 18:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:15.797 18:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
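The single-threaded phase traced above is one tight loop in ns_hotplug_stress.sh: while the I/O generator whose initialization and results output appear above (PID 2726151) is still alive, namespace 1 is detached and re-attached and the NULL1 bdev is grown one step per pass. A minimal sketch of that loop, reconstructed from the @44-@50 trace lines; the rpc and perf_pid variable names are assumptions for illustration, not names taken from the script:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1023
    # kill -0 sends no signal; it only tests whether the PID still exists,
    # which is why the loop ends with the "No such process" message above.
    while kill -0 "$perf_pid"; do
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        "$rpc" bdev_null_resize NULL1 "$null_size"   # grow NULL1 one step per pass
        (( null_size++ ))
    done
    wait "$perf_pid"   # reap the generator once kill -0 starts failing

As a cross-check on the results table: 30757.06 IO/s at the 512-byte I/O size these numbers imply works out to 30757.06 * 512 / 1048576, roughly 15.02, matching the MiB/s column; the I/O size itself is an inference from the figures, not something the log states.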
18:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:16.577 null3 00:06:16.577 18:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:16.577 18:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:16.577 18:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:16.836 null4 00:06:16.836 18:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:16.836 18:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:16.836 18:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:16.836 null5 00:06:16.836 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:16.836 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:16.836 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:17.097 null6 00:06:17.097 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:17.097 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:17.097 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:17.357 null7 00:06:17.357 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:17.357 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:17.357 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:17.357 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:17.357 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
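The eight null bdevs just created (null0 through null7, each echoed by name on success) come from the @58-@60 lines: a plain counter loop over rpc.py bdev_null_create, one backing bdev per worker, with the size and block-size arguments 100 and 4096 taken verbatim from the trace. A sketch under the same assumed rpc variable as above:

    nthreads=8
    pids=()
    for (( i = 0; i < nthreads; i++ )); do
        "$rpc" bdev_null_create "null$i" 100 4096   # create one backing bdev per worker
    done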
00:06:17.357 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:06:17.357 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:17.357 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:17.357 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:06:17.357 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:17.357 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:06:17.357 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:17.357 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:17.357 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.357 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:17.357 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:17.357 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:17.357 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:17.357 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:06:17.357 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:06:17.357 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:17.357 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.357 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:17.357 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:17.357 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:17.357 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:17.357 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:06:17.357 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:06:17.357 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:17.357 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.357 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
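Each add_remove worker traced above and below is a small function (the @14-@18 lines) that attaches and detaches one namespace ID, backed by one of the null bdevs, ten times in a row; the @62-@66 lines launch eight of them in the background and then wait on the collected PIDs. A sketch reconstructed from those trace lines, again using the assumed rpc variable:

    # add_remove: repeatedly attach/detach one namespace backed by one bdev.
    add_remove() {
        local nsid=$1 bdev=$2
        for (( i = 0; i < 10; i++ )); do
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    for (( i = 0; i < nthreads; i++ )); do
        add_remove $(( i + 1 )) "null$i" &   # one background worker per bdev
        pids+=($!)                           # @64: collect worker PIDs
    done
    wait "${pids[@]}"                        # @66: block until all eight finish

Because the eight workers run concurrently, their xtrace output interleaves freely, which is why add and remove lines for different namespace IDs appear shuffled through the rest of this trace.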
00:06:17.357 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:17.357 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:17.357 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:17.357 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:06:17.357 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:06:17.357 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:17.358 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.358 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:17.358 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:17.358 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:17.358 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:17.358 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:06:17.358 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:06:17.358 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:17.358 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:17.358 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.358 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:17.358 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:17.358 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:17.358 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:06:17.358 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:06:17.358 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:17.358 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:17.358 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:17.358 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:17.358 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.358 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:17.358 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:06:17.358 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:17.358 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:17.358 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:06:17.358 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:17.358 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2732713 2732714 2732716 2732718 2732720 2732722 2732724 2732725
00:06:17.358 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:17.358 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.358 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:06:17.358 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:17.358 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:06:17.358 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:17.358 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.358 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:17.618 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:17.618 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:17.618 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:17.618 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:17.618 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:17.618 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:17.618 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:17.618 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.618 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.618 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:17.619 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.619 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.619 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:17.619 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.619 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.619 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:17.619 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.619 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.619 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:17.879 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.879 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.879 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:17.879 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.879 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.879 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:17.879 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.879 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.879 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:17.879 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.879 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.879 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:17.879 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:17.879 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.879 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:17.879 18:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:17.879 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:17.879 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:17.879 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:17.880 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:18.140 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.140 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.140 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:18.140 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.140 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.140 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:18.140 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.140 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.140 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:18.140 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.140 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.140 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:18.140 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.140 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.140 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:18.140 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.140 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.140 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:18.140 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.140 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.140 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.140 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:18.140 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.140 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:18.140 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:18.140 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.140 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:18.401 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:18.401 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:18.401 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:18.401 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:18.401 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.401 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.401 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:18.401 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:18.401 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.401 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.401 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:18.401 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.401 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.401 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:18.401 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.401 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.401 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:18.401 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.401 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.401 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:18.663 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.663 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.663 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:18.663 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.663 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.663 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:18.663 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:18.663 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.663 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.663 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:18.663 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.663 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:18.663 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:18.663 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:18.663 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:18.663 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:18.663 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.663 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.663 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:18.663 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:18.663 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.663 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.663 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:18.925 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.925 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.925 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:18.925 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.925 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.925 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:18.925 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.925 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.925 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:18.925 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.925 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.925 18:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:18.925 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.925 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:18.925 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.925 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.925 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:18.925 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.925 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.925 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:18.925 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:19.186 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:19.186 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:19.186 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:19.186 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:19.186 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.186 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.186 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:19.186 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.186 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.186 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:19.186 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.186 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.186 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:19.186 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:19.186 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.186 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.186 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:19.186 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.186 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.186 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.186 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.186 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:19.186 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:19.448 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.448 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.448 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:19.448 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:19.448 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:19.448 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:19.448 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.448 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.448 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.448 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:19.448 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:19.448 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:19.448 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:19.448 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.448 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.448 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:19.710 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.710 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.710 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:19.710 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.710 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.710 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:19.710 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:19.710 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.710 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.710 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:19.710 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.710 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.710 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:19.710 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:19.710 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.710 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.710 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:19.710 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.710 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.710 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:19.710 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:19.710 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:19.710 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:19.710 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.710 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.710 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:19.710 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.972 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.972 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.972 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:19.972 18:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:19.972 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:19.972 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.972 18:56:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.972 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:19.972 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.972 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.972 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:19.972 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.972 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.972 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:19.972 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.972 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.972 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:19.972 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:19.972 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:19.972 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.972 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.972 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:20.233 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.233 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.233 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:20.233 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:20.233 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:20.233 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.233 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:20.233 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.233 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.233 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:20.233 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.233 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.233 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:20.233 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:20.233 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:20.233 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.233 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.233 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.233 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:20.233 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.234 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:20.234 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.234 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.234 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:20.234 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
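The two RPCs exercised above are the whole surface of this stress: nvmf_subsystem_add_ns -n <nsid> <nqn> <bdev> attaches a bdev to the subsystem as namespace <nsid>, and nvmf_subsystem_remove_ns <nqn> <nsid> detaches it while initiators stay connected. A minimal standalone pair of calls in the same form the trace uses (rpc.py reaching the target over its default local socket is an assumption here):

    # attach null0 as namespace 1 of cnode1, then hot-remove it
    scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1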
00:06:20.234 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.234 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:20.495 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:20.495 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:20.495 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.495 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.495 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:20.495 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.495 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.495 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:20.495 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:20.495 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:20.495 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:20.495 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.495 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.495 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:20.495 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.495 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.495 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:20.495 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.495 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:20.759 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:20.759 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.759 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.759 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:20.760 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.760 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.760 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:20.760 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.760 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.760 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:20.760 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:20.760 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:20.760 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.760 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.760 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:20.760 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.760 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.760 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:20.760 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
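Every line of this storm traces back to just three script lines, visible in the @16/@17/@18 markers: a bounded counter loop (line 16) around one add (line 17) and one remove (line 18). The way different namespace IDs interleave within the same millisecond suggests one such loop runs in the background per null bdev. A sketch reconstructed from those markers (the function name and argument plumbing are assumptions; only the loop guard and the two RPC calls come from the trace):

    # assumed per-namespace worker; eight appear to run in parallel, one per null0..null7 bdev
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do                                                  # line 16
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # line 17
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # line 18
        done
    }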
00:06:20.760 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.760 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:20.760 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:20.760 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:20.760 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.760 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.022 18:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:21.022 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.022 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.022 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:21.022 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:21.022 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.022 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:21.022 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.022 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.022 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.022 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.022 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.022 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.022 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:21.022 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.022 18:56:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.022 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.022 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.022 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.022 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.283 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.283 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.283 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:21.283 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:21.283 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:21.283 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:21.283 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:21.283 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:21.283 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:21.283 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:21.283 rmmod nvme_tcp 00:06:21.283 rmmod nvme_fabrics 00:06:21.283 rmmod nvme_keyring 00:06:21.283 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:21.283 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:21.283 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:21.283 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2725454 ']' 00:06:21.283 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2725454 00:06:21.283 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2725454 ']' 00:06:21.283 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2725454 00:06:21.283 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:21.283 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.283 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2725454 00:06:21.545 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:21.545 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:21.545 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2725454' 00:06:21.545 killing process with pid 2725454 00:06:21.545 18:56:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2725454 00:06:21.545 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2725454 00:06:21.545 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:21.545 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:21.545 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:21.545 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:21.545 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:21.545 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:21.545 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:21.545 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:21.545 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:21.545 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:21.545 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:21.545 18:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:24.085 18:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:24.085 00:06:24.085 real 0m49.282s 00:06:24.085 user 3m20.673s 00:06:24.085 sys 0m17.581s 00:06:24.085 18:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.085 18:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:24.085 ************************************ 00:06:24.085 END TEST nvmf_ns_hotplug_stress 00:06:24.085 ************************************ 00:06:24.085 18:56:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:24.085 18:56:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:24.085 18:56:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.085 18:56:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:24.085 ************************************ 00:06:24.085 START TEST nvmf_delete_subsystem 00:06:24.085 ************************************ 00:06:24.085 18:56:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:24.085 * Looking for test storage... 
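With the loop done, the trap is cleared and nvmftestfini unwinds the whole environment: nvmfcleanup syncs and retries unloading the kernel NVMe modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines) with errors tolerated, killprocess verifies pid 2725454 is the reactor before killing and waiting on it, and the TCP-specific teardown restores iptables minus the SPDK_NVMF rules and removes the test netns. A condensed paraphrase of the traced sequence (function names are the ones in the trace; the bodies and the pid variable name are paraphrased, not verbatim):

    nvmftestfini() {
        nvmfcleanup                                  # sync; up to 20 tries of "modprobe -v -r nvme-tcp/nvme-fabrics" under set +e
        [ -n "$nvmfpid" ] && killprocess "$nvmfpid"  # check comm is reactor_1 (not sudo), kill 2725454, wait for exit
        iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only the test's firewall rules
        remove_spdk_ns                               # delete the cvl_0_0_ns_spdk network namespace
        ip -4 addr flush cvl_0_1                     # drop the test address left on the second port
    }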
00:06:24.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:24.085 18:56:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:24.085 18:56:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:06:24.085 18:56:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:24.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.085 --rc genhtml_branch_coverage=1 00:06:24.085 --rc genhtml_function_coverage=1 00:06:24.085 --rc genhtml_legend=1 00:06:24.085 --rc geninfo_all_blocks=1 00:06:24.085 --rc geninfo_unexecuted_blocks=1 00:06:24.085 00:06:24.085 ' 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:24.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.085 --rc genhtml_branch_coverage=1 00:06:24.085 --rc genhtml_function_coverage=1 00:06:24.085 --rc genhtml_legend=1 00:06:24.085 --rc geninfo_all_blocks=1 00:06:24.085 --rc geninfo_unexecuted_blocks=1 00:06:24.085 00:06:24.085 ' 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:24.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.085 --rc genhtml_branch_coverage=1 00:06:24.085 --rc genhtml_function_coverage=1 00:06:24.085 --rc genhtml_legend=1 00:06:24.085 --rc geninfo_all_blocks=1 00:06:24.085 --rc geninfo_unexecuted_blocks=1 00:06:24.085 00:06:24.085 ' 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:24.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.085 --rc genhtml_branch_coverage=1 00:06:24.085 --rc genhtml_function_coverage=1 00:06:24.085 --rc genhtml_legend=1 00:06:24.085 --rc geninfo_all_blocks=1 00:06:24.085 --rc geninfo_unexecuted_blocks=1 00:06:24.085 00:06:24.085 ' 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:24.085 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:24.086 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:24.086 18:56:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:32.230 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:32.230 
18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:32.230 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:32.230 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:32.230 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:32.231 Found net devices under 0000:4b:00.1: cvl_0_1 
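For anyone replaying this step outside the harness: the device scan above (gather_supported_nvmf_pci_devs) registers the supported Intel/Mellanox PCI device IDs, then resolves each matching PCI function to its kernel net device by globbing sysfs, which is where the two "Found net devices" lines come from. A minimal standalone sketch of that lookup, reusing the E810 BDF 0000:4b:00.0 reported in this run:

  pci=0000:4b:00.0                                  # BDF taken from the log above
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # one sysfs entry per backing netdev
  pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the path, keep interface names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"   # prints: cvl_0_0

This is the same idiom common.sh traces at @411/@427/@428 above; it needs no special privileges since sysfs is world-readable.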
00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:32.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:32.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:06:32.231 00:06:32.231 --- 10.0.0.2 ping statistics --- 00:06:32.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:32.231 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:32.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:32.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:06:32.231 00:06:32.231 --- 10.0.0.1 ping statistics --- 00:06:32.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:32.231 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2737900 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2737900 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2737900 ']' 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.231 18:56:48 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.231 18:56:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:32.231 [2024-11-26 18:56:48.633343] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:06:32.231 [2024-11-26 18:56:48.633409] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:32.231 [2024-11-26 18:56:48.735647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.231 [2024-11-26 18:56:48.787287] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:32.231 [2024-11-26 18:56:48.787340] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:32.231 [2024-11-26 18:56:48.787354] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:32.231 [2024-11-26 18:56:48.787361] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:32.231 [2024-11-26 18:56:48.787367] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:32.231 [2024-11-26 18:56:48.788992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.231 [2024-11-26 18:56:48.788996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.494 18:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.494 18:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:32.494 18:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:32.494 18:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:32.494 18:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:32.494 18:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:32.494 18:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:32.494 18:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.494 18:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:32.494 [2024-11-26 18:56:49.501487] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:32.494 18:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.494 18:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:32.494 18:56:49 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.494 18:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:32.494 18:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.494 18:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:32.494 18:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.494 18:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:32.494 [2024-11-26 18:56:49.525788] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:32.494 18:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.494 18:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:32.494 18:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.494 18:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:32.494 NULL1 00:06:32.494 18:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.494 18:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:32.494 18:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.494 18:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:32.494 Delay0 00:06:32.494 18:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.494 18:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.494 18:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.494 18:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:32.494 18:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.494 18:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2738249 00:06:32.494 18:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:32.494 18:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:32.494 [2024-11-26 18:56:49.652828] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
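Condensed, the nvmf_tcp_init and RPC configuration traced above amount to: isolate one NIC port in a network namespace as the target side, keep its peer port on the host as the initiator side, open TCP port 4420, start nvmf_tgt inside the namespace, and build the subsystem over JSON-RPC. A hand-runnable sketch under the same names and addresses as this run (SPDK_DIR is an assumed path to a built SPDK tree, and the sleep is a crude stand-in for the harness's poll on /var/tmp/spdk.sock):

  SPDK_DIR=/path/to/spdk                       # assumption: a built SPDK checkout
  ip netns add cvl_0_0_ns_spdk                 # target namespace, as in the log
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays on the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # tag the firewall rule so cleanup can strip it with: iptables-save | grep -v SPDK_NVMF | iptables-restore
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF: test rule'
  ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -m 0x3 &
  sleep 2                                      # the test waits on the RPC socket instead
  rpc="$SPDK_DIR/scripts/rpc.py"               # the UNIX RPC socket lives on the host filesystem
  "$rpc" nvmf_create_transport -t tcp -o -u 8192                  # flags copied from this run
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$rpc" bdev_null_create NULL1 1000 512       # 1000 MB null bdev, 512 B blocks
  "$rpc" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000  # ~1 s latency (us)
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The Delay0 bdev is what makes the deletion test meaningful: with roughly a second of injected latency per I/O, queued commands are still in flight when nvmf_delete_subsystem runs, which is presumably why the delete below produces the long run of aborted reads and writes (sct=0, sc=8).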
00:06:34.411 18:56:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:34.411 18:56:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.411 18:56:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 starting I/O failed: -6 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 starting I/O failed: -6 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 starting I/O failed: -6 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 starting I/O failed: -6 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 starting I/O failed: -6 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 starting I/O failed: -6 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 starting I/O failed: -6 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 starting I/O failed: -6 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 starting I/O failed: -6 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 starting I/O failed: -6 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 [2024-11-26 18:56:51.898460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a62c0 is same with the state(6) to be set 00:06:34.983 starting I/O failed: -6 00:06:34.983 starting I/O failed: -6 00:06:34.983 starting I/O failed: -6 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, 
sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 starting I/O failed: -6 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 starting I/O failed: -6 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 starting I/O failed: -6 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read 
completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 starting I/O failed: -6 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 starting I/O failed: -6 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 starting I/O failed: -6 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 starting I/O failed: -6 00:06:34.983 Write completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 starting I/O failed: -6 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.983 Read completed with error (sct=0, sc=8) 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 Write completed with error (sct=0, sc=8) 00:06:34.984 starting I/O failed: -6 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 Write completed with error (sct=0, sc=8) 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 Write completed with error (sct=0, sc=8) 00:06:34.984 starting I/O failed: -6 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 starting I/O failed: -6 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 starting I/O failed: -6 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 Write completed with error (sct=0, sc=8) 00:06:34.984 starting I/O failed: -6 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 starting I/O failed: -6 00:06:34.984 Write completed with error (sct=0, sc=8) 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 starting I/O failed: -6 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 starting I/O failed: -6 00:06:34.984 Write completed with error (sct=0, sc=8) 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 starting I/O failed: -6 00:06:34.984 Write completed with error (sct=0, sc=8) 00:06:34.984 Write completed with error (sct=0, sc=8) 00:06:34.984 starting I/O failed: -6 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 Write completed with error (sct=0, sc=8) 00:06:34.984 starting I/O failed: -6 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 starting I/O failed: -6 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 starting I/O failed: -6 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 starting I/O failed: -6 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 Read completed with error (sct=0, 
sc=8) 00:06:34.984 starting I/O failed: -6 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 Write completed with error (sct=0, sc=8) 00:06:34.984 starting I/O failed: -6 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 starting I/O failed: -6 00:06:34.984 Write completed with error (sct=0, sc=8) 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 starting I/O failed: -6 00:06:34.984 Write completed with error (sct=0, sc=8) 00:06:34.984 Write completed with error (sct=0, sc=8) 00:06:34.984 starting I/O failed: -6 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 starting I/O failed: -6 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 Write completed with error (sct=0, sc=8) 00:06:34.984 starting I/O failed: -6 00:06:34.984 Write completed with error (sct=0, sc=8) 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 starting I/O failed: -6 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 starting I/O failed: -6 00:06:34.984 Write completed with error (sct=0, sc=8) 00:06:34.984 Write completed with error (sct=0, sc=8) 00:06:34.984 starting I/O failed: -6 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 starting I/O failed: -6 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 starting I/O failed: -6 00:06:34.984 Read completed with error (sct=0, sc=8) 00:06:34.984 Write completed with error (sct=0, sc=8) 00:06:34.984 starting I/O failed: -6 00:06:34.984 starting I/O failed: -6 00:06:34.984 starting I/O failed: -6 00:06:34.984 starting I/O failed: -6 00:06:34.984 starting I/O failed: -6 00:06:34.984 starting I/O failed: -6 00:06:34.984 starting I/O failed: -6 00:06:34.984 starting I/O failed: -6 00:06:34.984 starting I/O failed: -6 00:06:34.984 starting I/O failed: -6 00:06:34.984 starting I/O failed: -6 00:06:34.984 starting I/O failed: -6 00:06:34.984 starting I/O failed: -6 00:06:34.984 starting I/O failed: -6 00:06:34.984 starting I/O failed: -6 00:06:34.984 starting I/O failed: -6 00:06:34.984 starting I/O failed: -6 00:06:35.927 [2024-11-26 18:56:52.874444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a79b0 is same with the state(6) to be set 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Write completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Write completed with error (sct=0, sc=8) 00:06:35.927 Write completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Write completed with error (sct=0, sc=8) 00:06:35.927 Write completed with error (sct=0, sc=8) 00:06:35.927 Write completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Write completed with error (sct=0, sc=8) 
00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Write completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Write completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 [2024-11-26 18:56:52.902045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a64a0 is same with the state(6) to be set 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Write completed with error (sct=0, sc=8) 00:06:35.927 Write completed with error (sct=0, sc=8) 00:06:35.927 Write completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Write completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Write completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Write completed with error (sct=0, sc=8) 00:06:35.927 Write completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 [2024-11-26 18:56:52.902381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a6860 is same with the state(6) to be set 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Write completed with error (sct=0, sc=8) 00:06:35.927 Write completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Write completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Write completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.927 Read completed with error (sct=0, sc=8) 00:06:35.928 Write completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Write completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Write completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Write completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Write completed with error (sct=0, sc=8) 00:06:35.928 Read 
completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Write completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 [2024-11-26 18:56:52.907022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fee6800d7c0 is same with the state(6) to be set 00:06:35.928 Write completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Write completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Write completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Write completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Write completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Write completed with error (sct=0, sc=8) 00:06:35.928 Write completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 Write completed with error (sct=0, sc=8) 00:06:35.928 Write completed with error (sct=0, sc=8) 00:06:35.928 Read completed with error (sct=0, sc=8) 00:06:35.928 [2024-11-26 18:56:52.907226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fee6800d020 is same with the state(6) to be set 00:06:35.928 Initializing NVMe Controllers 00:06:35.928 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:35.928 Controller IO queue size 128, less than required. 00:06:35.928 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:35.928 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:35.928 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:35.928 Initialization complete. Launching workers. 
00:06:35.928 ======================================================== 00:06:35.928 Latency(us) 00:06:35.928 Device Information : IOPS MiB/s Average min max 00:06:35.928 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 168.30 0.08 899757.67 309.93 1006924.07 00:06:35.928 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 178.76 0.09 932314.08 388.77 1011803.08 00:06:35.928 ======================================================== 00:06:35.928 Total : 347.07 0.17 916526.32 309.93 1011803.08 00:06:35.928 00:06:35.928 [2024-11-26 18:56:52.907826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24a79b0 (9): Bad file descriptor 00:06:35.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:35.928 18:56:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.928 18:56:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:35.928 18:56:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2738249 00:06:35.928 18:56:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:36.499 18:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:36.499 18:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2738249 00:06:36.499 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2738249) - No such process 00:06:36.499 18:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2738249 00:06:36.499 18:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:36.499 18:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2738249 00:06:36.499 18:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:36.499 18:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.499 18:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:36.499 18:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.499 18:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2738249 00:06:36.499 18:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:36.499 18:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:36.499 18:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:36.499 18:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:36.499 18:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:36.499 18:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.499 18:56:53 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:36.499 18:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.499 18:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:36.499 18:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.499 18:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:36.499 [2024-11-26 18:56:53.436948] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:36.499 18:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.499 18:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.499 18:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.499 18:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:36.499 18:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.499 18:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2738930 00:06:36.499 18:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:36.499 18:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:36.499 18:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2738930 00:06:36.499 18:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:36.499 [2024-11-26 18:56:53.535477] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
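The wait that follows is a PID-liveness poll rather than a wait(2): kill -0 delivers no signal and only asks the kernel whether the PID still exists, so the loop naps in 0.5 s steps until spdk_nvme_perf exits, and the bare "kill: (pid) - No such process" line traced below is simply the poll failing once it has. Reconstructed from the trace markers (delete_subsystem.sh lines 52-60; the backgrounded perf invocation is the one launched above):

  "$SPDK_DIR/build/bin/spdk_nvme_perf" -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!                      # the log's perf_pid=2738930
  delay=0
  while kill -0 "$perf_pid"; do    # true while perf is alive; prints "No such process" once it is not
      (( delay++ > 20 )) && exit 1 # give up after ~10 s; perf only runs for 3 s (-t 3)
      sleep 0.5
  done

Bounding the poll at 20 iterations is what turns a hung perf process into a test failure instead of a stalled pipeline.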
00:06:36.762 18:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:36.762 18:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2738930 00:06:36.762 18:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:37.335 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:37.335 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2738930 00:06:37.335 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:37.905 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:37.905 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2738930 00:06:37.905 18:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:38.477 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:38.477 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2738930 00:06:38.478 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:39.049 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:39.049 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2738930 00:06:39.049 18:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:39.309 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:39.309 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2738930 00:06:39.309 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:39.879 Initializing NVMe Controllers 00:06:39.879 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:39.879 Controller IO queue size 128, less than required. 00:06:39.879 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:39.879 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:39.879 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:39.879 Initialization complete. Launching workers. 
00:06:39.879 ======================================================== 00:06:39.879 Latency(us) 00:06:39.879 Device Information : IOPS MiB/s Average min max 00:06:39.879 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001952.55 1000124.19 1005500.03 00:06:39.879 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003598.90 1000169.85 1041440.99 00:06:39.879 ======================================================== 00:06:39.879 Total : 256.00 0.12 1002775.72 1000124.19 1041440.99 00:06:39.879 00:06:39.879 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:39.879 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2738930 00:06:39.879 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2738930) - No such process 00:06:39.879 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2738930 00:06:39.879 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:39.879 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:39.879 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:39.879 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:39.879 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:39.879 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:39.879 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:39.879 18:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:39.879 rmmod nvme_tcp 00:06:39.879 rmmod nvme_fabrics 00:06:39.879 rmmod nvme_keyring 00:06:39.879 18:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:39.879 18:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:39.879 18:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:39.879 18:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2737900 ']' 00:06:39.879 18:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2737900 00:06:39.879 18:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2737900 ']' 00:06:39.879 18:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2737900 00:06:39.879 18:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:39.879 18:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.879 18:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2737900 00:06:40.146 18:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.146 18:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:06:40.146 18:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2737900' 00:06:40.146 killing process with pid 2737900 00:06:40.146 18:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2737900 00:06:40.146 18:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2737900 00:06:40.146 18:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:40.146 18:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:40.146 18:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:40.146 18:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:40.146 18:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:40.146 18:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:40.146 18:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:40.146 18:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:40.146 18:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:40.146 18:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:40.146 18:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:40.146 18:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:42.131 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:42.131 00:06:42.131 real 0m18.498s 00:06:42.131 user 0m31.242s 00:06:42.131 sys 0m6.870s 00:06:42.131 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.132 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.132 ************************************ 00:06:42.132 END TEST nvmf_delete_subsystem 00:06:42.132 ************************************ 00:06:42.401 18:56:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:42.401 18:56:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:42.401 18:56:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.401 18:56:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:42.401 ************************************ 00:06:42.401 START TEST nvmf_host_management 00:06:42.401 ************************************ 00:06:42.401 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:42.401 * Looking for test storage... 
00:06:42.401 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:42.401 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:42.401 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:06:42.401 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:42.401 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:42.401 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.401 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.401 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.401 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.401 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.401 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.401 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.401 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.401 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.401 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.401 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.401 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:42.401 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:42.402 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.402 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:42.402 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:42.402 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:42.402 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.402 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:42.402 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.402 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:42.402 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:42.402 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.402 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:42.402 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.402 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.402 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.402 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:42.402 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.402 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:42.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.402 --rc genhtml_branch_coverage=1 00:06:42.402 --rc genhtml_function_coverage=1 00:06:42.402 --rc genhtml_legend=1 00:06:42.402 --rc geninfo_all_blocks=1 00:06:42.402 --rc geninfo_unexecuted_blocks=1 00:06:42.402 00:06:42.402 ' 00:06:42.402 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:42.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.402 --rc genhtml_branch_coverage=1 00:06:42.402 --rc genhtml_function_coverage=1 00:06:42.402 --rc genhtml_legend=1 00:06:42.402 --rc geninfo_all_blocks=1 00:06:42.402 --rc geninfo_unexecuted_blocks=1 00:06:42.402 00:06:42.402 ' 00:06:42.402 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:42.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.402 --rc genhtml_branch_coverage=1 00:06:42.402 --rc genhtml_function_coverage=1 00:06:42.402 --rc genhtml_legend=1 00:06:42.402 --rc geninfo_all_blocks=1 00:06:42.402 --rc geninfo_unexecuted_blocks=1 00:06:42.402 00:06:42.402 ' 00:06:42.402 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:42.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.402 --rc genhtml_branch_coverage=1 00:06:42.402 --rc genhtml_function_coverage=1 00:06:42.402 --rc genhtml_legend=1 00:06:42.402 --rc geninfo_all_blocks=1 00:06:42.402 --rc geninfo_unexecuted_blocks=1 00:06:42.402 00:06:42.402 ' 00:06:42.402 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:42.402 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:42.402 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:42.402 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:42.402 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:42.402 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:42.402 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:42.402 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:42.402 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:42.402 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:42.402 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:42.402 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:42.664 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:42.664 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:42.664 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:42.664 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:42.664 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:42.664 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:42.664 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:42.664 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:42.664 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.664 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.664 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.664 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.664 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.664 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.665 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:42.665 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.665 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:42.665 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:42.665 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:42.665 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:42.665 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:42.665 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:42.665 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:42.665 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:42.665 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:42.665 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:42.665 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:42.665 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:42.665 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:42.665 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:42.665 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:42.665 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:42.665 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:42.665 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:42.665 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:42.665 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:42.665 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:42.665 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:42.665 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:42.665 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:42.665 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:42.665 18:56:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:50.809 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:50.809 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:50.810 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:50.810 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:50.810 18:57:06 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:50.810 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:50.810 18:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:50.810 18:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:50.810 18:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:50.810 18:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:50.810 18:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:50.810 18:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:50.810 18:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:50.810 18:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:50.810 18:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:50.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:50.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.565 ms 00:06:50.810 00:06:50.810 --- 10.0.0.2 ping statistics --- 00:06:50.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.810 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms 00:06:50.810 18:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:50.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:50.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:06:50.810 00:06:50.810 --- 10.0.0.1 ping statistics --- 00:06:50.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.810 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:06:50.810 18:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:50.810 18:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:50.810 18:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:50.810 18:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:50.810 18:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:50.810 18:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:50.810 18:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:50.810 18:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:50.810 18:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:50.810 18:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:50.810 18:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:50.810 18:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:50.811 18:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:50.811 18:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:50.811 18:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:50.811 18:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2744063 00:06:50.811 18:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2744063 00:06:50.811 18:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:50.811 18:57:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2744063 ']' 00:06:50.811 18:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.811 18:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.811 18:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.811 18:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.811 18:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:50.811 [2024-11-26 18:57:07.269181] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:06:50.811 [2024-11-26 18:57:07.269243] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:50.811 [2024-11-26 18:57:07.371773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:50.811 [2024-11-26 18:57:07.425737] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:50.811 [2024-11-26 18:57:07.425788] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:50.811 [2024-11-26 18:57:07.425797] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:50.811 [2024-11-26 18:57:07.425804] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:50.811 [2024-11-26 18:57:07.425810] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
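
nvmfappstart above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with core mask 0x1E (cores 1 through 4, which is why the next four reactor notices land on those cores) and then blocks until the app answers on its UNIX RPC socket. A rough stand-in for that start-and-wait sequence, with the polling loop assumed rather than copied from waitforlisten's real implementation:

    # Start the target in the netns and wait for /var/tmp/spdk.sock.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # target died during startup
        sleep 0.1
    done
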
00:06:50.811 [2024-11-26 18:57:07.427911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.811 [2024-11-26 18:57:07.428073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.811 [2024-11-26 18:57:07.428121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.811 [2024-11-26 18:57:07.428122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:51.073 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.073 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:51.073 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:51.073 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:51.073 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:51.073 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:51.073 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:51.073 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.073 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:51.073 [2024-11-26 18:57:08.141891] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:51.073 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.073 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:51.073 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:51.073 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:51.073 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:51.073 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:51.073 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:51.073 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.073 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:51.073 Malloc0 00:06:51.073 [2024-11-26 18:57:08.226614] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:51.073 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.073 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:51.073 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:51.073 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:51.336 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=2744333 00:06:51.336 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2744333 /var/tmp/bdevperf.sock 00:06:51.336 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2744333 ']' 00:06:51.336 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:51.336 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.336 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:51.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:51.336 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:51.336 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.336 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:51.336 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:51.336 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:51.336 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:51.336 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:51.336 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:51.336 { 00:06:51.336 "params": { 00:06:51.336 "name": "Nvme$subsystem", 00:06:51.336 "trtype": "$TEST_TRANSPORT", 00:06:51.336 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:51.336 "adrfam": "ipv4", 00:06:51.336 "trsvcid": "$NVMF_PORT", 00:06:51.336 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:51.336 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:51.336 "hdgst": ${hdgst:-false}, 00:06:51.336 "ddgst": ${ddgst:-false} 00:06:51.336 }, 00:06:51.336 "method": "bdev_nvme_attach_controller" 00:06:51.336 } 00:06:51.336 EOF 00:06:51.336 )") 00:06:51.336 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:51.336 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:51.336 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:51.336 18:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:51.336 "params": { 00:06:51.336 "name": "Nvme0", 00:06:51.336 "trtype": "tcp", 00:06:51.336 "traddr": "10.0.0.2", 00:06:51.336 "adrfam": "ipv4", 00:06:51.336 "trsvcid": "4420", 00:06:51.336 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:51.336 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:51.336 "hdgst": false, 00:06:51.336 "ddgst": false 00:06:51.336 }, 00:06:51.336 "method": "bdev_nvme_attach_controller" 00:06:51.336 }' 00:06:51.336 [2024-11-26 18:57:08.337634] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
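
The heredoc at nvmf/common.sh@582 above renders one bdev_nvme_attach_controller entry per subsystem, and bdevperf receives the assembled document as --json /dev/fd/63 through process substitution, so no config file ever touches disk. Reflowed for readability, the entry printed at @586 (values exactly as rendered in the trace; the enclosing config envelope is elided there as well) is:

    {
      "params": {
        "name": "Nvme0",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }

With -q 64 -o 65536 -w verify -t 10, bdevperf then drives 64 outstanding 64 KiB verify I/Os against that controller for 10 seconds, which is what the "Running I/O for 10 seconds..." line that follows reports.
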
00:06:51.336 [2024-11-26 18:57:08.337702] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2744333 ] 00:06:51.336 [2024-11-26 18:57:08.433422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.336 [2024-11-26 18:57:08.488244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.908 Running I/O for 10 seconds... 00:06:52.170 18:57:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.170 18:57:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:52.170 18:57:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:52.170 18:57:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.170 18:57:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:52.170 18:57:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.170 18:57:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:52.170 18:57:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:52.170 18:57:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:52.170 18:57:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:52.170 18:57:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:52.170 18:57:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:52.170 18:57:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:52.170 18:57:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:52.170 18:57:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:52.170 18:57:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:52.170 18:57:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.170 18:57:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:52.170 18:57:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.170 18:57:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=668 00:06:52.170 18:57:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 668 -ge 100 ']' 00:06:52.170 18:57:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:52.170 18:57:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:52.170 18:57:09 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:52.170 18:57:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:52.170 18:57:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.170 18:57:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:52.170 [2024-11-26 18:57:09.234671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e23150 is same with the state(6) to be set 00:06:52.170 [2024-11-26 18:57:09.234746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e23150 is same with the state(6) to be set 00:06:52.170 [2024-11-26 18:57:09.234752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e23150 is same with the state(6) to be set 00:06:52.170 [2024-11-26 18:57:09.234758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e23150 is same with the state(6) to be set 00:06:52.170 [2024-11-26 18:57:09.234763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e23150 is same with the state(6) to be set 00:06:52.170 [2024-11-26 18:57:09.234770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e23150 is same with the state(6) to be set 00:06:52.170 [2024-11-26 18:57:09.234775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e23150 is same with the state(6) to be set 00:06:52.170 [2024-11-26 18:57:09.234780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e23150 is same with the state(6) to be set 00:06:52.170 [2024-11-26 18:57:09.234785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e23150 is same with the state(6) to be set 00:06:52.170 [2024-11-26 18:57:09.234790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e23150 is same with the state(6) to be set 00:06:52.170 [2024-11-26 18:57:09.234796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e23150 is same with the state(6) to be set 00:06:52.170 [2024-11-26 18:57:09.234801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e23150 is same with the state(6) to be set 00:06:52.170 [2024-11-26 18:57:09.234806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e23150 is same with the state(6) to be set 00:06:52.170 [2024-11-26 18:57:09.234812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e23150 is same with the state(6) to be set 00:06:52.170 [2024-11-26 18:57:09.234818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e23150 is same with the state(6) to be set 00:06:52.170 [2024-11-26 18:57:09.234822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e23150 is same with the state(6) to be set 00:06:52.170 [2024-11-26 18:57:09.234827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e23150 is same with the state(6) to be set 00:06:52.170 [2024-11-26 18:57:09.234832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e23150 is same with the state(6) to be set 00:06:52.170 [2024-11-26 18:57:09.234838] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e23150 is same with the state(6) to be set 00:06:52.170 [2024-11-26 18:57:09.234844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e23150 is same with the state(6) to be set 00:06:52.170 [2024-11-26 18:57:09.234849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e23150 is same with the state(6) to be set 00:06:52.170 [2024-11-26 18:57:09.234858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e23150 is same with the state(6) to be set 00:06:52.170 [2024-11-26 18:57:09.234864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e23150 is same with the state(6) to be set 00:06:52.170 [2024-11-26 18:57:09.234869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e23150 is same with the state(6) to be set 00:06:52.170 [2024-11-26 18:57:09.234874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e23150 is same with the state(6) to be set 00:06:52.170 [2024-11-26 18:57:09.234880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e23150 is same with the state(6) to be set 00:06:52.170 [2024-11-26 18:57:09.238198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.170 [2024-11-26 18:57:09.238240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:52.170 [2024-11-26 18:57:09.238257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.170 [2024-11-26 18:57:09.238266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:52.170 [2024-11-26 18:57:09.238276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.170 [2024-11-26 18:57:09.238284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:52.170 [2024-11-26 18:57:09.238294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.170 [2024-11-26 18:57:09.238302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:52.170 [2024-11-26 18:57:09.238312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.170 [2024-11-26 18:57:09.238319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:52.170 [2024-11-26 18:57:09.238329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.170 [2024-11-26 18:57:09.238337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:52.170 [2024-11-26 18:57:09.238347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:06:52.170 [2024-11-26 18:57:09.238354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:52.170 [2024-11-26 18:57:09.238364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.170 [2024-11-26 18:57:09.238371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:52.170 [2024-11-26 18:57:09.238382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.170 [2024-11-26 18:57:09.238389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:52.170 [2024-11-26 18:57:09.238399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.170 [2024-11-26 18:57:09.238407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:52.170 [2024-11-26 18:57:09.238423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.170 [2024-11-26 18:57:09.238431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:52.170 [2024-11-26 18:57:09.238440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.170 [2024-11-26 18:57:09.238448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:52.170 [2024-11-26 18:57:09.238457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.170 [2024-11-26 18:57:09.238465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:52.171 [2024-11-26 18:57:09.238474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.171 [2024-11-26 18:57:09.238482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:52.171 [2024-11-26 18:57:09.238491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.171 [2024-11-26 18:57:09.238498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:52.171 [2024-11-26 18:57:09.238508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.171 [2024-11-26 18:57:09.238515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:52.171 [2024-11-26 18:57:09.238525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:06:52.171 [2024-11-26 18:57:09.238533 .. 18:57:09.239361] nvme_qpair.c: (repeated *NOTICE* output condensed) every outstanding I/O on qid:1 was printed and completed as ABORTED - SQ DELETION (00/08) during the controller reset: READ commands cid:6-27 (lba 99072-101760, len:128 each) and WRITE commands cid:38-62 (lba 103168-106240, len:128 each), each nvme_io_qpair_print_command line paired with a matching spdk_nvme_print_completion line. 00:06:52.171
18:57:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.171 18:57:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:52.171 18:57:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.171 18:57:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:52.171 [2024-11-26 18:57:09.240614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:52.171 task offset: 101888 on job bdev=Nvme0n1 fails 00:06:52.171 00:06:52.171 Latency(us) 00:06:52.171 [2024-11-26T17:57:09.384Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:52.171
Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:52.171 Job: Nvme0n1 ended in about 0.42 seconds with error 00:06:52.171 Verification LBA range: start 0x0 length 0x400 00:06:52.171 Nvme0n1 : 0.42 1846.66 115.42 154.09 0.00 30930.04 1815.89 35826.35 00:06:52.171 [2024-11-26T17:57:09.385Z] =================================================================================================================== 00:06:52.172 [2024-11-26T17:57:09.385Z] Total : 1846.66 115.42 154.09 0.00 30930.04 1815.89 35826.35 00:06:52.172 [2024-11-26 18:57:09.242639] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.172 [2024-11-26 18:57:09.242665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cd010 (9): Bad file descriptor 00:06:52.172 18:57:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.172 18:57:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:52.172 [2024-11-26 18:57:09.294814] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:06:53.115 18:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2744333 00:06:53.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2744333) - No such process 00:06:53.115 18:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:53.115 18:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:53.115 18:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:53.115 18:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:53.115 18:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:53.115 18:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:53.115 18:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:53.115 18:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:53.115 { 00:06:53.115 "params": { 00:06:53.115 "name": "Nvme$subsystem", 00:06:53.115 "trtype": "$TEST_TRANSPORT", 00:06:53.115 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:53.115 "adrfam": "ipv4", 00:06:53.115 "trsvcid": "$NVMF_PORT", 00:06:53.115 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:53.115 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:53.115 "hdgst": ${hdgst:-false}, 00:06:53.115 "ddgst": ${ddgst:-false} 00:06:53.115 }, 00:06:53.115 "method": "bdev_nvme_attach_controller" 00:06:53.115 } 00:06:53.115 EOF 00:06:53.115 )") 00:06:53.115 18:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:53.115 18:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
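What the trace above is doing: gen_nvmf_target_json expands the heredoc template once per subsystem, joins the fragments with IFS=',', validates the result with jq, and bdevperf then reads the finished config over process substitution (that is the /dev/fd/62 on the command line). A minimal single-subsystem sketch of that flow follows; it is not the verbatim nvmf/common.sh implementation, and the variable values are simply the ones this run used:

    # Sketch only: single-subsystem case of the helper traced above.
    TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420
    gen_nvmf_target_json() {
        local subsystem config=()
        for subsystem in "${@:-0}"; do
            config+=("{
              \"params\": {
                \"name\": \"Nvme$subsystem\",
                \"trtype\": \"$TEST_TRANSPORT\",
                \"traddr\": \"$NVMF_FIRST_TARGET_IP\",
                \"adrfam\": \"ipv4\",
                \"trsvcid\": \"$NVMF_PORT\",
                \"subnqn\": \"nqn.2016-06.io.spdk:cnode$subsystem\",
                \"hostnqn\": \"nqn.2016-06.io.spdk:host$subsystem\",
                \"hdgst\": false,
                \"ddgst\": false
              },
              \"method\": \"bdev_nvme_attach_controller\"
            }")
        done
        local IFS=,                           # comma-join if more than one fragment
        printf '%s\n' "${config[*]}" | jq .   # jq validates and pretty-prints
    }
    # bdevperf picks the config up on an anonymous fd, as in the log:
    #   bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1

The substituted output that the real helper printed for this run follows in the trace below.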
00:06:53.115 18:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:53.115 18:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:53.115 "params": { 00:06:53.115 "name": "Nvme0", 00:06:53.115 "trtype": "tcp", 00:06:53.115 "traddr": "10.0.0.2", 00:06:53.116 "adrfam": "ipv4", 00:06:53.116 "trsvcid": "4420", 00:06:53.116 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:53.116 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:53.116 "hdgst": false, 00:06:53.116 "ddgst": false 00:06:53.116 }, 00:06:53.116 "method": "bdev_nvme_attach_controller" 00:06:53.116 }' 00:06:53.116 [2024-11-26 18:57:10.309046] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:06:53.116 [2024-11-26 18:57:10.309104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2745061 ] 00:06:53.376 [2024-11-26 18:57:10.399051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.376 [2024-11-26 18:57:10.435179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.637 Running I/O for 1 seconds... 00:06:54.581 1736.00 IOPS, 108.50 MiB/s 00:06:54.581 Latency(us) 00:06:54.581 [2024-11-26T17:57:11.794Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:54.581 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:54.581 Verification LBA range: start 0x0 length 0x400 00:06:54.581 Nvme0n1 : 1.01 1786.88 111.68 0.00 0.00 35068.27 1495.04 32549.55 00:06:54.581 [2024-11-26T17:57:11.794Z] =================================================================================================================== 00:06:54.581 [2024-11-26T17:57:11.794Z] Total : 1786.88 111.68 0.00 0.00 35068.27 1495.04 32549.55 00:06:54.581 18:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:54.581 18:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:54.581 18:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:54.581 18:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:54.581 18:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:54.581 18:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:54.581 18:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:54.581 18:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:54.581 18:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:54.581 18:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:54.581 18:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:54.581 rmmod nvme_tcp 00:06:54.843 rmmod nvme_fabrics 00:06:54.843 rmmod nvme_keyring 00:06:54.843 18:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:54.843 18:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:54.843 18:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:54.843 18:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2744063 ']' 00:06:54.843 18:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2744063 00:06:54.843 18:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2744063 ']' 00:06:54.843 18:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2744063 00:06:54.843 18:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:54.843 18:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.843 18:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2744063 00:06:54.843 18:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:54.843 18:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:54.843 18:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2744063' 00:06:54.843 killing process with pid 2744063 00:06:54.843 18:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2744063 00:06:54.843 18:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2744063 00:06:54.843 [2024-11-26 18:57:12.000192] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:54.843 18:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:54.843 18:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:54.843 18:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:54.843 18:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:54.843 18:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:54.843 18:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:54.843 18:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:54.844 18:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:54.844 18:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:54.844 18:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:54.844 18:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:54.844 18:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.391 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:57.391 18:57:14 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:57.391 00:06:57.391 real 0m14.713s 00:06:57.391 user 0m23.131s 00:06:57.391 sys 0m6.790s 00:06:57.391 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.391 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:57.391 ************************************ 00:06:57.391 END TEST nvmf_host_management 00:06:57.391 ************************************ 00:06:57.391 18:57:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:57.391 18:57:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:57.391 18:57:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.391 18:57:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:57.391 ************************************ 00:06:57.391 START TEST nvmf_lvol 00:06:57.391 ************************************ 00:06:57.391 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:57.391 * Looking for test storage... 00:06:57.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:57.391 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:57.391 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:06:57.391 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:57.391 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:57.391 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.391 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.391 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.391 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.391 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.391 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.391 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.391 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.391 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.391 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.391 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.391 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:57.391 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:57.391 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.391 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:57.391 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:57.391 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:57.391 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.391 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:57.391 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:57.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.392 --rc genhtml_branch_coverage=1 00:06:57.392 --rc genhtml_function_coverage=1 00:06:57.392 --rc genhtml_legend=1 00:06:57.392 --rc geninfo_all_blocks=1 00:06:57.392 --rc geninfo_unexecuted_blocks=1 00:06:57.392 00:06:57.392 ' 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:57.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.392 --rc genhtml_branch_coverage=1 00:06:57.392 --rc genhtml_function_coverage=1 00:06:57.392 --rc genhtml_legend=1 00:06:57.392 --rc geninfo_all_blocks=1 00:06:57.392 --rc geninfo_unexecuted_blocks=1 00:06:57.392 00:06:57.392 ' 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:57.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.392 --rc genhtml_branch_coverage=1 00:06:57.392 --rc genhtml_function_coverage=1 00:06:57.392 --rc genhtml_legend=1 00:06:57.392 --rc geninfo_all_blocks=1 00:06:57.392 --rc geninfo_unexecuted_blocks=1 00:06:57.392 00:06:57.392 ' 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:57.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.392 --rc genhtml_branch_coverage=1 00:06:57.392 --rc genhtml_function_coverage=1 00:06:57.392 --rc genhtml_legend=1 00:06:57.392 --rc geninfo_all_blocks=1 00:06:57.392 --rc geninfo_unexecuted_blocks=1 00:06:57.392 00:06:57.392 ' 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
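The lt/cmp_versions trace above is a plain field-by-field version comparison: each version string is split on '.', '-' and ':' (IFS=.-:), and the fields are compared numerically left to right, which is why lt 1.15 2 succeeds and the legacy lcov --rc option set is selected. A condensed, self-contained sketch of that logic, assuming purely numeric fields (the real scripts/common.sh covers more cases):

    # Sketch of the "lt 1.15 2" check traced above; returns 0 when $1 < $2.
    version_lt() {
        local IFS=.-: i v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for ((i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++)); do
            ((${v1[i]:-0} < ${v2[i]:-0})) && return 0   # first lower field wins
            ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
        done
        return 1   # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "lcov < 2: keep the legacy --rc lcov_* options"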
00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:57.392 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:57.392 18:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:05.542 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:05.542 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:05.542 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:05.542 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:05.542 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:05.542 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:05.542 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:05.542 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:05.542 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:05.542 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:05.542 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:05.542 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:05.542 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:05.542 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:05.542 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:05.542 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:05.542 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:05.542 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:05.542 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:05.542 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:05.542 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:05.543 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:05.543 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:05.543 18:57:21 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:05.543 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:05.543 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:05.543 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:05.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.595 ms 00:07:05.543 00:07:05.543 --- 10.0.0.2 ping statistics --- 00:07:05.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:05.543 rtt min/avg/max/mdev = 0.595/0.595/0.595/0.000 ms 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:05.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:05.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:07:05.543 00:07:05.543 --- 10.0.0.1 ping statistics --- 00:07:05.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:05.543 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2749742 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2749742 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2749742 ']' 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.543 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.544 18:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:05.544 [2024-11-26 18:57:21.984132] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
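For reference, the nvmf_tcp_init sequence traced above, consolidated in one place: the target-side port cvl_0_0 moves into a private network namespace, the initiator keeps cvl_0_1, each side gets a 10.0.0.0/24 address, an iptables rule admits the NVMe/TCP port, and reachability is checked in both directions before the target starts inside the namespace. Every command below appears in the trace; only the nvmf_tgt path is shortened:

    # Consolidated from the trace above (root required).
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                      # namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator
    # The nvmf target then runs inside the namespace (path shortened):
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x7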
00:07:05.544 [2024-11-26 18:57:21.984206] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:05.544 [2024-11-26 18:57:22.085023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:05.544 [2024-11-26 18:57:22.137170] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:05.544 [2024-11-26 18:57:22.137224] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:05.544 [2024-11-26 18:57:22.137233] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:05.544 [2024-11-26 18:57:22.137240] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:05.544 [2024-11-26 18:57:22.137246] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:05.544 [2024-11-26 18:57:22.139094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:05.544 [2024-11-26 18:57:22.139205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:05.544 [2024-11-26 18:57:22.139206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:05.805 18:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:05.805 18:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0
00:07:05.805 18:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:07:05.805 18:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:05.805 18:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:07:05.805 18:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:05.805 18:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:07:06.065 [2024-11-26 18:57:23.028804] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:06.065 18:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:07:06.326 18:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 '
00:07:06.326 18:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:07:06.326 18:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1
00:07:06.326 18:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
00:07:06.587 18:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs
00:07:06.848 18:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=25668cf8-41e4-4de9-865f-9f2bb887f5ef
00:07:06.848 18:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 25668cf8-41e4-4de9-865f-9f2bb887f5ef lvol 20
00:07:07.109 18:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=1fac6266-a817-478a-97d8-cc3756c6f6bf
00:07:07.109 18:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:07:07.109 18:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1fac6266-a817-478a-97d8-cc3756c6f6bf
00:07:07.370 18:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:07:07.632 [2024-11-26 18:57:24.681298] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:07.632 18:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:07:07.893 18:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2750317
00:07:07.893 18:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1
00:07:07.893 18:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:07:08.837 18:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 1fac6266-a817-478a-97d8-cc3756c6f6bf MY_SNAPSHOT
00:07:09.098 18:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1734c3e4-99a7-4623-8998-8eff5133fc88
00:07:09.098 18:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 1fac6266-a817-478a-97d8-cc3756c6f6bf 30
00:07:09.358 18:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1734c3e4-99a7-4623-8998-8eff5133fc88 MY_CLONE
00:07:09.359 18:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=77af5cbf-79e3-4709-b34e-f3a5e5f42b5d
00:07:09.359 18:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 77af5cbf-79e3-4709-b34e-f3a5e5f42b5d
00:07:09.929 18:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2750317
00:07:19.927 Initializing NVMe Controllers
00:07:19.927 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:07:19.927 Controller IO queue size 128, less than required.
00:07:19.927 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
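
Condensed from the trace above, the nvmf_lvol test body is a straight run of rpc.py calls: build a RAID0 from two malloc bdevs, put an lvstore and a 20 MiB lvol on it, export the lvol over NVMe/TCP, then snapshot, resize, clone, and inflate it while spdk_nvme_perf writes to it. A sketch with the same arguments as this run; the UUIDs are whatever a given run generates, and $rpc is shorthand for the full rpc.py path used in the log:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                      # -> Malloc0 (64 MiB, 512 B blocks)
$rpc bdev_malloc_create 64 512                      # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)      # prints the lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)     # 20 MiB lvol, prints its UUID
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# while spdk_nvme_perf runs against the namespace:
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30                    # grow the live lvol to 30 MiB
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"                     # decouple the clone from its snapshot
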
00:07:19.927 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:07:19.927 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:07:19.927 Initialization complete. Launching workers.
00:07:19.927 ========================================================
00:07:19.927 Latency(us)
00:07:19.927 Device Information : IOPS MiB/s Average min max
00:07:19.927 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16592.79 64.82 7716.99 1915.01 69535.19
00:07:19.927 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16172.09 63.17 7914.52 3982.66 64793.87
00:07:19.927 ========================================================
00:07:19.927 Total : 32764.88 127.99 7814.49 1915.01 69535.19
00:07:19.927 
00:07:19.927 18:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:07:19.927 18:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1fac6266-a817-478a-97d8-cc3756c6f6bf
00:07:19.927 18:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 25668cf8-41e4-4de9-865f-9f2bb887f5ef
00:07:19.927 18:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:07:19.927 18:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:07:19.927 18:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:07:19.927 18:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:19.927 18:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:07:19.927 18:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:19.927 18:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:07:19.927 18:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:19.927 18:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:07:19.927 18:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:19.927 18:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:07:19.927 18:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:07:19.927 18:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2749742 ']'
00:07:19.927 18:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2749742
00:07:19.927 18:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2749742 ']'
00:07:19.927 18:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2749742
00:07:19.927 18:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
00:07:19.927 18:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:19.927 18:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2749742
00:07:19.927 18:57:35
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:19.927 18:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:19.927 18:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2749742' 00:07:19.927 killing process with pid 2749742 00:07:19.927 18:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2749742 00:07:19.927 18:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2749742 00:07:19.927 18:57:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:19.927 18:57:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:19.927 18:57:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:19.927 18:57:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:19.927 18:57:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:19.927 18:57:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:19.927 18:57:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:19.927 18:57:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:19.928 18:57:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:19.928 18:57:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.928 18:57:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:19.928 18:57:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:21.315 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:21.315 00:07:21.315 real 0m24.010s 00:07:21.315 user 1m5.060s 00:07:21.315 sys 0m8.742s 00:07:21.315 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.315 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:21.315 ************************************ 00:07:21.315 END TEST nvmf_lvol 00:07:21.315 ************************************ 00:07:21.315 18:57:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:21.315 18:57:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:21.315 18:57:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.315 18:57:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:21.315 ************************************ 00:07:21.315 START TEST nvmf_lvs_grow 00:07:21.315 ************************************ 00:07:21.315 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:21.315 * Looking for test storage... 
00:07:21.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:21.316 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:21.316 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:07:21.316 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:21.316 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:21.316 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:21.316 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:21.316 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:21.316 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:21.316 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:21.316 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:21.316 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:21.316 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:21.316 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:21.316 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:21.316 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:21.316 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:21.316 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:21.316 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:21.316 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:21.316 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:21.316 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:21.316 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.316 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:21.316 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:21.316 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:21.316 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:21.316 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.316 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:21.316 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:21.316 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:21.316 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:21.316 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:21.316 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:21.316 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:21.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.316 --rc genhtml_branch_coverage=1 00:07:21.316 --rc genhtml_function_coverage=1 00:07:21.316 --rc genhtml_legend=1 00:07:21.316 --rc geninfo_all_blocks=1 00:07:21.316 --rc geninfo_unexecuted_blocks=1 00:07:21.316 00:07:21.316 ' 00:07:21.316 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:21.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.316 --rc genhtml_branch_coverage=1 00:07:21.316 --rc genhtml_function_coverage=1 00:07:21.316 --rc genhtml_legend=1 00:07:21.316 --rc geninfo_all_blocks=1 00:07:21.316 --rc geninfo_unexecuted_blocks=1 00:07:21.316 00:07:21.316 ' 00:07:21.316 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:21.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.316 --rc genhtml_branch_coverage=1 00:07:21.316 --rc genhtml_function_coverage=1 00:07:21.316 --rc genhtml_legend=1 00:07:21.316 --rc geninfo_all_blocks=1 00:07:21.316 --rc geninfo_unexecuted_blocks=1 00:07:21.316 00:07:21.316 ' 00:07:21.316 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:21.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.316 --rc genhtml_branch_coverage=1 00:07:21.317 --rc genhtml_function_coverage=1 00:07:21.317 --rc genhtml_legend=1 00:07:21.317 --rc geninfo_all_blocks=1 00:07:21.317 --rc geninfo_unexecuted_blocks=1 00:07:21.317 00:07:21.317 ' 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:21.317 18:57:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:21.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.317 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:21.318 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:21.580 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:21.580 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:21.580 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:21.580 18:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:29.724 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:29.724 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:29.724 18:57:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:29.724 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:29.724 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:29.724 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:29.725 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}")
00:07:29.725 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:07:29.725 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:07:29.725 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:07:29.725 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:07:29.725 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:07:29.725 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:07:29.725 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:07:29.725 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:07:29.725 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:07:29.725 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:07:29.725 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:07:29.725 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:07:29.725 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:07:29.725 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:07:29.725 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:29.725 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:29.725 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:29.725 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:07:29.725 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:07:29.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:29.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms
00:07:29.725 
00:07:29.725 --- 10.0.0.2 ping statistics ---
00:07:29.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:29.725 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms
00:07:29.725 18:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:29.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
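
The target/initiator split above is plain iproute2: one port of the e810 pair (cvl_0_0) is moved into a private namespace for the target, the other (cvl_0_1) stays in the root namespace as the initiator, and a firewall exception is punched for the NVMe/TCP port. Condensed from the trace, with the same names and addresses as this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                  # target reachable from the root namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and the initiator from the target
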
00:07:29.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms
00:07:29.725 
00:07:29.725 --- 10.0.0.1 ping statistics ---
00:07:29.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:29.725 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms
00:07:29.725 18:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:29.725 18:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0
00:07:29.725 18:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:07:29.725 18:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:29.725 18:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:07:29.725 18:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:07:29.725 18:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:29.725 18:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:07:29.725 18:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:07:29.725 18:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1
00:07:29.725 18:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:07:29.725 18:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:29.725 18:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:07:29.725 18:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2756851
00:07:29.725 18:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2756851
00:07:29.725 18:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:07:29.725 18:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2756851 ']'
00:07:29.725 18:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:29.725 18:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:29.725 18:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:29.725 18:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:29.725 18:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:07:29.725 [2024-11-26 18:57:46.125652] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization...
00:07:29.725 [2024-11-26 18:57:46.125715] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:29.725 [2024-11-26 18:57:46.226279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.725 [2024-11-26 18:57:46.277355] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:29.725 [2024-11-26 18:57:46.277412] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:29.725 [2024-11-26 18:57:46.277421] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:29.725 [2024-11-26 18:57:46.277428] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:29.725 [2024-11-26 18:57:46.277435] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:29.725 [2024-11-26 18:57:46.278218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.986 18:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.986 18:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:29.986 18:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:29.986 18:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:29.986 18:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:29.986 18:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:29.986 18:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:29.986 [2024-11-26 18:57:47.162322] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.246 18:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:30.246 18:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:30.246 18:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.246 18:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:30.246 ************************************ 00:07:30.246 START TEST lvs_grow_clean 00:07:30.246 ************************************ 00:07:30.246 18:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:30.246 18:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:30.246 18:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:30.246 18:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:30.246 18:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:30.246 18:57:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:30.246 18:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:30.246 18:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:30.246 18:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:30.246 18:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:30.507 18:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:30.507 18:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:30.507 18:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=1298b16c-6477-4982-83ef-1a2795c0c40c 00:07:30.507 18:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1298b16c-6477-4982-83ef-1a2795c0c40c 00:07:30.507 18:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:30.768 18:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:30.768 18:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:30.768 18:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1298b16c-6477-4982-83ef-1a2795c0c40c lvol 150 00:07:31.030 18:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=12eead63-21e0-4851-a6d7-9d89478a6ced 00:07:31.030 18:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:31.030 18:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:31.030 [2024-11-26 18:57:48.197055] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:31.030 [2024-11-26 18:57:48.197132] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:31.030 true 00:07:31.030 18:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
1298b16c-6477-4982-83ef-1a2795c0c40c 00:07:31.030 18:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:31.291 18:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:31.291 18:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:31.551 18:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 12eead63-21e0-4851-a6d7-9d89478a6ced 00:07:31.812 18:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:31.812 [2024-11-26 18:57:48.971530] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:31.812 18:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:32.073 18:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:32.073 18:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2757405 00:07:32.073 18:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:32.073 18:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2757405 /var/tmp/bdevperf.sock 00:07:32.073 18:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2757405 ']' 00:07:32.073 18:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:32.073 18:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.073 18:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:32.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:32.073 18:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.073 18:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:32.073 [2024-11-26 18:57:49.212515] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
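
The cluster counts that lvs_grow_clean asserts follow directly from the sizes above: a 200 MiB AIO file with a 4 MiB cluster size holds 50 clusters, one of which the lvstore reserves for metadata, hence total_data_clusters=49; after the backing file is truncated to 400 MiB and rescanned (51200 -> 102400 4 KiB blocks), bdev_lvol_grow_lvstore, which appears further down in the trace, brings that to 99. A condensed sketch of the flow (the grow step is shown right after the rescan here; in the actual run it happens while bdevperf is writing):

aio=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
truncate -s 200M "$aio"
$rpc bdev_aio_create "$aio" aio_bdev 4096           # 4 KiB logical blocks
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs)
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)    # 150 MiB = 38 x 4 MiB clusters
truncate -s 400M "$aio"                             # grow the backing file
$rpc bdev_aio_rescan aio_bdev                       # bdev picks up the new size
$rpc bdev_lvol_grow_lvstore -u "$lvs"               # 49 -> 99 data clusters
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99

The free_clusters=61 seen at teardown is the same arithmetic: 99 total minus the 38 clusters pinned by the 150 MiB lvol.
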
00:07:32.073 [2024-11-26 18:57:49.212584] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2757405 ]
00:07:32.334 [2024-11-26 18:57:49.309291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:32.334 [2024-11-26 18:57:49.362265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:32.906 18:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:32.906 18:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0
00:07:32.906 18:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:07:33.479 Nvme0n1
00:07:33.479 18:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:07:33.479 [
00:07:33.479 {
00:07:33.479 "name": "Nvme0n1",
00:07:33.479 "aliases": [
00:07:33.479 "12eead63-21e0-4851-a6d7-9d89478a6ced"
00:07:33.479 ],
00:07:33.479 "product_name": "NVMe disk",
00:07:33.479 "block_size": 4096,
00:07:33.479 "num_blocks": 38912,
00:07:33.479 "uuid": "12eead63-21e0-4851-a6d7-9d89478a6ced",
00:07:33.479 "numa_id": 0,
00:07:33.479 "assigned_rate_limits": {
00:07:33.479 "rw_ios_per_sec": 0,
00:07:33.479 "rw_mbytes_per_sec": 0,
00:07:33.479 "r_mbytes_per_sec": 0,
00:07:33.479 "w_mbytes_per_sec": 0
00:07:33.479 },
00:07:33.479 "claimed": false,
00:07:33.479 "zoned": false,
00:07:33.479 "supported_io_types": {
00:07:33.479 "read": true,
00:07:33.479 "write": true,
00:07:33.479 "unmap": true,
00:07:33.479 "flush": true,
00:07:33.479 "reset": true,
00:07:33.479 "nvme_admin": true,
00:07:33.479 "nvme_io": true,
00:07:33.479 "nvme_io_md": false,
00:07:33.479 "write_zeroes": true,
00:07:33.479 "zcopy": false,
00:07:33.479 "get_zone_info": false,
00:07:33.479 "zone_management": false,
00:07:33.479 "zone_append": false,
00:07:33.479 "compare": true,
00:07:33.479 "compare_and_write": true,
00:07:33.479 "abort": true,
00:07:33.479 "seek_hole": false,
00:07:33.479 "seek_data": false,
00:07:33.479 "copy": true,
00:07:33.479 "nvme_iov_md": false
00:07:33.479 },
00:07:33.479 "memory_domains": [
00:07:33.479 {
00:07:33.479 "dma_device_id": "system",
00:07:33.479 "dma_device_type": 1
00:07:33.479 }
00:07:33.479 ],
00:07:33.479 "driver_specific": {
00:07:33.479 "nvme": [
00:07:33.479 {
00:07:33.479 "trid": {
00:07:33.479 "trtype": "TCP",
00:07:33.479 "adrfam": "IPv4",
00:07:33.479 "traddr": "10.0.0.2",
00:07:33.479 "trsvcid": "4420",
00:07:33.479 "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:07:33.479 },
00:07:33.479 "ctrlr_data": {
00:07:33.479 "cntlid": 1,
00:07:33.479 "vendor_id": "0x8086",
00:07:33.479 "model_number": "SPDK bdev Controller",
00:07:33.479 "serial_number": "SPDK0",
00:07:33.479 "firmware_revision": "25.01",
00:07:33.479 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:07:33.479 "oacs": {
00:07:33.479 "security": 0,
00:07:33.479 "format": 0,
00:07:33.479 "firmware": 0,
00:07:33.479 "ns_manage": 0
00:07:33.479 },
00:07:33.479 "multi_ctrlr": true,
00:07:33.479 "ana_reporting": false
00:07:33.479 },
00:07:33.479 "vs": {
00:07:33.479 "nvme_version": "1.3"
00:07:33.479 },
00:07:33.479 "ns_data": {
00:07:33.479 "id": 1,
00:07:33.479 "can_share": true
00:07:33.479 }
00:07:33.479 }
00:07:33.479 ],
00:07:33.479 "mp_policy": "active_passive"
00:07:33.479 }
00:07:33.479 }
00:07:33.479 ]
00:07:33.479 18:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2757739
00:07:33.479 18:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:07:33.479 18:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:07:33.740 Running I/O for 10 seconds...
00:07:34.682 Latency(us)
00:07:34.682 [2024-11-26T17:57:51.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:34.682 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:34.682 Nvme0n1 : 1.00 24875.00 97.17 0.00 0.00 0.00 0.00 0.00
00:07:34.682 [2024-11-26T17:57:51.895Z] ===================================================================================================================
00:07:34.682 [2024-11-26T17:57:51.895Z] Total : 24875.00 97.17 0.00 0.00 0.00 0.00 0.00
00:07:34.682 
00:07:35.624 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1298b16c-6477-4982-83ef-1a2795c0c40c
00:07:35.624 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:35.624 Nvme0n1 : 2.00 24721.50 96.57 0.00 0.00 0.00 0.00 0.00
00:07:35.624 [2024-11-26T17:57:52.837Z] ===================================================================================================================
00:07:35.624 [2024-11-26T17:57:52.837Z] Total : 24721.50 96.57 0.00 0.00 0.00 0.00 0.00
00:07:35.624 
00:07:35.624 true
00:07:35.624 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1298b16c-6477-4982-83ef-1a2795c0c40c
00:07:35.624 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:07:35.885 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:07:35.885 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:07:35.885 18:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2757739
00:07:36.825 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:36.825 Nvme0n1 : 3.00 24678.33 96.40 0.00 0.00 0.00 0.00 0.00
00:07:36.825 [2024-11-26T17:57:54.038Z] ===================================================================================================================
00:07:36.825 [2024-11-26T17:57:54.038Z] Total : 24678.33 96.40 0.00 0.00 0.00 0.00 0.00
00:07:36.825 
00:07:37.766 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:37.766 Nvme0n1 : 4.00 24678.75 96.40 0.00 0.00 0.00 0.00 0.00
00:07:37.766 [2024-11-26T17:57:54.979Z] ===================================================================================================================
00:07:37.766 [2024-11-26T17:57:54.979Z] Total : 24678.75 96.40 0.00 0.00 0.00 0.00 0.00
00:07:37.766 
00:07:38.706 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:38.706 Nvme0n1 : 5.00 24685.40 96.43 0.00 0.00 0.00 0.00 0.00
00:07:38.706 [2024-11-26T17:57:55.919Z] ===================================================================================================================
00:07:38.706 [2024-11-26T17:57:55.919Z] Total : 24685.40 96.43 0.00 0.00 0.00 0.00 0.00
00:07:38.706 
00:07:39.648 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:39.648 Nvme0n1 : 6.00 24695.17 96.47 0.00 0.00 0.00 0.00 0.00
00:07:39.648 [2024-11-26T17:57:56.861Z] ===================================================================================================================
00:07:39.648 [2024-11-26T17:57:56.861Z] Total : 24695.17 96.47 0.00 0.00 0.00 0.00 0.00
00:07:39.648 
00:07:40.589 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:40.589 Nvme0n1 : 7.00 24706.71 96.51 0.00 0.00 0.00 0.00 0.00
00:07:40.589 [2024-11-26T17:57:57.802Z] ===================================================================================================================
00:07:40.589 [2024-11-26T17:57:57.802Z] Total : 24706.71 96.51 0.00 0.00 0.00 0.00 0.00
00:07:40.589 
00:07:41.532 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:41.532 Nvme0n1 : 8.00 24716.38 96.55 0.00 0.00 0.00 0.00 0.00
00:07:41.532 [2024-11-26T17:57:58.745Z] ===================================================================================================================
00:07:41.532 [2024-11-26T17:57:58.745Z] Total : 24716.38 96.55 0.00 0.00 0.00 0.00 0.00
00:07:41.532 
00:07:42.916 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:42.916 Nvme0n1 : 9.00 24726.56 96.59 0.00 0.00 0.00 0.00 0.00
00:07:42.916 [2024-11-26T17:58:00.129Z] ===================================================================================================================
00:07:42.916 [2024-11-26T17:58:00.129Z] Total : 24726.56 96.59 0.00 0.00 0.00 0.00 0.00
00:07:42.916 
00:07:43.858 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:43.858 Nvme0n1 : 10.00 24729.90 96.60 0.00 0.00 0.00 0.00 0.00
00:07:43.858 [2024-11-26T17:58:01.071Z] ===================================================================================================================
00:07:43.858 [2024-11-26T17:58:01.071Z] Total : 24729.90 96.60 0.00 0.00 0.00 0.00 0.00
00:07:43.858 
00:07:43.858 
00:07:43.858 Latency(us)
00:07:43.858 [2024-11-26T17:58:01.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:43.858 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:43.858 Nvme0n1 : 10.00 24729.54 96.60 0.00 0.00 5171.91 2157.23 8301.23
00:07:43.858 [2024-11-26T17:58:01.071Z] ===================================================================================================================
00:07:43.858 [2024-11-26T17:58:01.071Z] Total : 24729.54 96.60 0.00 0.00 5171.91 2157.23 8301.23
00:07:43.858 {
00:07:43.858 "results": [
00:07:43.858 {
00:07:43.858 "job": "Nvme0n1",
00:07:43.858 "core_mask": "0x2",
00:07:43.858 "workload": "randwrite",
00:07:43.858 "status": "finished",
00:07:43.858 "queue_depth": 128,
00:07:43.858 "io_size": 4096,
00:07:43.858 "runtime": 10.004999,
00:07:43.858 "iops": 24729.53770410172,
00:07:43.858 "mibps": 96.59975665664734,
00:07:43.858 "io_failed": 0,
00:07:43.858 "io_timeout": 0,
00:07:43.858 "avg_latency_us": 5171.91234410723,
00:07:43.858 "min_latency_us": 2157.2266666666665,
00:07:43.858 "max_latency_us": 8301.226666666667
00:07:43.858 }
00:07:43.858 ],
00:07:43.858 "core_count": 1
00:07:43.858 }
00:07:43.858 18:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2757405
00:07:43.858 18:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2757405 ']'
00:07:43.858 18:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2757405
00:07:43.858 18:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname
00:07:43.858 18:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:43.858 18:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2757405
00:07:43.858 18:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:07:43.858 18:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:07:43.858 18:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2757405'
killing process with pid 2757405
00:07:43.858 18:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2757405
00:07:43.858 Received shutdown signal, test time was about 10.000000 seconds
00:07:43.858 
00:07:43.858 Latency(us)
00:07:43.858 [2024-11-26T17:58:01.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:43.858 [2024-11-26T17:58:01.071Z] ===================================================================================================================
00:07:43.858 [2024-11-26T17:58:01.071Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:07:43.858 18:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2757405
00:07:43.858 18:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:07:44.138 18:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:07:44.138 18:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1298b16c-6477-4982-83ef-1a2795c0c40c
00:07:44.138 18:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:07:44.497 18:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:07:44.497 18:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]]
00:07:44.498 18:58:01
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:44.498 [2024-11-26 18:58:01.645510] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:44.834 18:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1298b16c-6477-4982-83ef-1a2795c0c40c 00:07:44.834 18:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:44.834 18:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1298b16c-6477-4982-83ef-1a2795c0c40c 00:07:44.834 18:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:44.834 18:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:44.834 18:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:44.834 18:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:44.834 18:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:44.834 18:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:44.834 18:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:44.834 18:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:44.834 18:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1298b16c-6477-4982-83ef-1a2795c0c40c 00:07:44.834 request: 00:07:44.834 { 00:07:44.834 "uuid": "1298b16c-6477-4982-83ef-1a2795c0c40c", 00:07:44.834 "method": "bdev_lvol_get_lvstores", 00:07:44.834 "req_id": 1 00:07:44.834 } 00:07:44.834 Got JSON-RPC error response 00:07:44.834 response: 00:07:44.834 { 00:07:44.834 "code": -19, 00:07:44.834 "message": "No such device" 00:07:44.834 } 00:07:44.834 18:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:44.835 18:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:44.835 18:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:44.835 18:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:44.835 18:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:45.114 aio_bdev 00:07:45.114 18:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 12eead63-21e0-4851-a6d7-9d89478a6ced 00:07:45.114 18:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=12eead63-21e0-4851-a6d7-9d89478a6ced 00:07:45.114 18:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:45.114 18:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:45.114 18:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:45.114 18:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:45.114 18:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:45.115 18:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 12eead63-21e0-4851-a6d7-9d89478a6ced -t 2000 00:07:45.374 [ 00:07:45.374 { 00:07:45.374 "name": "12eead63-21e0-4851-a6d7-9d89478a6ced", 00:07:45.374 "aliases": [ 00:07:45.374 "lvs/lvol" 00:07:45.374 ], 00:07:45.374 "product_name": "Logical Volume", 00:07:45.374 "block_size": 4096, 00:07:45.375 "num_blocks": 38912, 00:07:45.375 "uuid": "12eead63-21e0-4851-a6d7-9d89478a6ced", 00:07:45.375 "assigned_rate_limits": { 00:07:45.375 "rw_ios_per_sec": 0, 00:07:45.375 "rw_mbytes_per_sec": 0, 00:07:45.375 "r_mbytes_per_sec": 0, 00:07:45.375 "w_mbytes_per_sec": 0 00:07:45.375 }, 00:07:45.375 "claimed": false, 00:07:45.375 "zoned": false, 00:07:45.375 "supported_io_types": { 00:07:45.375 "read": true, 00:07:45.375 "write": true, 00:07:45.375 "unmap": true, 00:07:45.375 "flush": false, 00:07:45.375 "reset": true, 00:07:45.375 "nvme_admin": false, 00:07:45.375 "nvme_io": false, 00:07:45.375 "nvme_io_md": false, 00:07:45.375 "write_zeroes": true, 00:07:45.375 "zcopy": false, 00:07:45.375 "get_zone_info": false, 00:07:45.375 "zone_management": false, 00:07:45.375 "zone_append": false, 00:07:45.375 "compare": false, 00:07:45.375 "compare_and_write": false, 00:07:45.375 "abort": false, 00:07:45.375 "seek_hole": true, 00:07:45.375 "seek_data": true, 00:07:45.375 "copy": false, 00:07:45.375 "nvme_iov_md": false 00:07:45.375 }, 00:07:45.375 "driver_specific": { 00:07:45.375 "lvol": { 00:07:45.375 "lvol_store_uuid": "1298b16c-6477-4982-83ef-1a2795c0c40c", 00:07:45.375 "base_bdev": "aio_bdev", 00:07:45.375 "thin_provision": false, 00:07:45.375 "num_allocated_clusters": 38, 00:07:45.375 "snapshot": false, 00:07:45.375 "clone": false, 00:07:45.375 "esnap_clone": false 00:07:45.375 } 00:07:45.375 } 00:07:45.375 } 00:07:45.375 ] 00:07:45.375 18:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:45.375 18:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1298b16c-6477-4982-83ef-1a2795c0c40c 00:07:45.375 
18:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:45.375 18:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:45.375 18:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1298b16c-6477-4982-83ef-1a2795c0c40c 00:07:45.375 18:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:45.634 18:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:45.634 18:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 12eead63-21e0-4851-a6d7-9d89478a6ced 00:07:45.894 18:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1298b16c-6477-4982-83ef-1a2795c0c40c 00:07:45.894 18:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:46.154 18:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:46.154 00:07:46.154 real 0m16.067s 00:07:46.154 user 0m15.574s 00:07:46.154 sys 0m1.604s 00:07:46.154 18:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.154 18:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:46.154 ************************************ 00:07:46.154 END TEST lvs_grow_clean 00:07:46.154 ************************************ 00:07:46.154 18:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:46.154 18:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:46.154 18:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.154 18:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:46.413 ************************************ 00:07:46.413 START TEST lvs_grow_dirty 00:07:46.413 ************************************ 00:07:46.413 18:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:46.413 18:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:46.413 18:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:46.413 18:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:46.413 18:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:46.413 18:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:46.413 18:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:46.413 18:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:46.413 18:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:46.413 18:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:46.413 18:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:46.413 18:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:46.673 18:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=10842f6e-8cc7-47dd-8baf-ce151792153b 00:07:46.673 18:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 10842f6e-8cc7-47dd-8baf-ce151792153b 00:07:46.673 18:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:46.932 18:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:46.932 18:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:46.932 18:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 10842f6e-8cc7-47dd-8baf-ce151792153b lvol 150 00:07:46.932 18:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=8ac79e05-fd02-48d3-88a7-9c4ba75c80bd 00:07:46.932 18:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:46.932 18:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:47.191 [2024-11-26 18:58:04.257783] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:47.191 [2024-11-26 18:58:04.257825] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:47.191 true 00:07:47.191 18:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 10842f6e-8cc7-47dd-8baf-ce151792153b 00:07:47.191 18:58:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:47.450 18:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:47.450 18:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:47.450 18:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8ac79e05-fd02-48d3-88a7-9c4ba75c80bd 00:07:47.709 18:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:47.968 [2024-11-26 18:58:04.919713] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:47.968 18:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:47.968 18:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2760747 00:07:47.968 18:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:47.968 18:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:47.968 18:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2760747 /var/tmp/bdevperf.sock 00:07:47.968 18:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2760747 ']' 00:07:47.968 18:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:47.968 18:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.968 18:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:47.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:47.968 18:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.968 18:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:47.968 [2024-11-26 18:58:05.152743] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
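The dirty-grow setup traced above boils down to a short rpc.py sequence: back an lvstore with a 200 MiB AIO file, carve a 150 MiB lvol out of it, grow the file to 400 MiB and rescan, then export the lvol over NVMe/TCP so bdevperf can drive I/O against it while the lvstore is grown. A minimal sketch of that sequence, assuming a running SPDK target on its default RPC socket; $AIO stands in here for the full aio_bdev path used in this run, and <lvs-uuid>/<lvol-uuid> are placeholders for the UUIDs printed by bdev_lvol_create_lvstore and bdev_lvol_create:

    truncate -s 200M "$AIO"                            # backing file for the lvstore
    scripts/rpc.py bdev_aio_create "$AIO" aio_bdev 4096
    scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs  # yields 49 x 4 MiB data clusters
    scripts/rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150
    truncate -s 400M "$AIO"                            # grow the backing file...
    scripts/rpc.py bdev_aio_rescan aio_bdev            # ...and pick up the new size
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

With I/O in flight, bdev_lvol_grow_lvstore is then issued against the lvstore UUID, and the test asserts via bdev_lvol_get_lvstores that total_data_clusters has grown from 49 to 99, exactly as the checks in the trace below verify.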
00:07:47.968 [2024-11-26 18:58:05.152796] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2760747 ] 00:07:48.228 [2024-11-26 18:58:05.234681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.228 [2024-11-26 18:58:05.264591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.798 18:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.798 18:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:48.798 18:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:49.058 Nvme0n1 00:07:49.058 18:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:49.318 [ 00:07:49.318 { 00:07:49.318 "name": "Nvme0n1", 00:07:49.318 "aliases": [ 00:07:49.318 "8ac79e05-fd02-48d3-88a7-9c4ba75c80bd" 00:07:49.318 ], 00:07:49.318 "product_name": "NVMe disk", 00:07:49.318 "block_size": 4096, 00:07:49.318 "num_blocks": 38912, 00:07:49.318 "uuid": "8ac79e05-fd02-48d3-88a7-9c4ba75c80bd", 00:07:49.318 "numa_id": 0, 00:07:49.318 "assigned_rate_limits": { 00:07:49.318 "rw_ios_per_sec": 0, 00:07:49.318 "rw_mbytes_per_sec": 0, 00:07:49.318 "r_mbytes_per_sec": 0, 00:07:49.318 "w_mbytes_per_sec": 0 00:07:49.318 }, 00:07:49.318 "claimed": false, 00:07:49.318 "zoned": false, 00:07:49.318 "supported_io_types": { 00:07:49.318 "read": true, 00:07:49.318 "write": true, 00:07:49.318 "unmap": true, 00:07:49.318 "flush": true, 00:07:49.318 "reset": true, 00:07:49.318 "nvme_admin": true, 00:07:49.318 "nvme_io": true, 00:07:49.318 "nvme_io_md": false, 00:07:49.318 "write_zeroes": true, 00:07:49.318 "zcopy": false, 00:07:49.318 "get_zone_info": false, 00:07:49.318 "zone_management": false, 00:07:49.318 "zone_append": false, 00:07:49.318 "compare": true, 00:07:49.318 "compare_and_write": true, 00:07:49.318 "abort": true, 00:07:49.318 "seek_hole": false, 00:07:49.318 "seek_data": false, 00:07:49.318 "copy": true, 00:07:49.318 "nvme_iov_md": false 00:07:49.318 }, 00:07:49.318 "memory_domains": [ 00:07:49.318 { 00:07:49.318 "dma_device_id": "system", 00:07:49.318 "dma_device_type": 1 00:07:49.318 } 00:07:49.318 ], 00:07:49.318 "driver_specific": { 00:07:49.318 "nvme": [ 00:07:49.318 { 00:07:49.318 "trid": { 00:07:49.318 "trtype": "TCP", 00:07:49.318 "adrfam": "IPv4", 00:07:49.318 "traddr": "10.0.0.2", 00:07:49.318 "trsvcid": "4420", 00:07:49.318 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:49.318 }, 00:07:49.318 "ctrlr_data": { 00:07:49.318 "cntlid": 1, 00:07:49.318 "vendor_id": "0x8086", 00:07:49.318 "model_number": "SPDK bdev Controller", 00:07:49.318 "serial_number": "SPDK0", 00:07:49.318 "firmware_revision": "25.01", 00:07:49.318 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:49.318 "oacs": { 00:07:49.318 "security": 0, 00:07:49.318 "format": 0, 00:07:49.318 "firmware": 0, 00:07:49.318 "ns_manage": 0 00:07:49.318 }, 00:07:49.318 "multi_ctrlr": true, 00:07:49.318 
"ana_reporting": false 00:07:49.318 }, 00:07:49.318 "vs": { 00:07:49.318 "nvme_version": "1.3" 00:07:49.318 }, 00:07:49.318 "ns_data": { 00:07:49.318 "id": 1, 00:07:49.318 "can_share": true 00:07:49.318 } 00:07:49.318 } 00:07:49.318 ], 00:07:49.318 "mp_policy": "active_passive" 00:07:49.318 } 00:07:49.318 } 00:07:49.318 ] 00:07:49.318 18:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2760870 00:07:49.318 18:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:49.318 18:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:49.318 Running I/O for 10 seconds... 00:07:50.262 Latency(us) 00:07:50.262 [2024-11-26T17:58:07.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:50.262 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:50.262 Nvme0n1 : 1.00 25047.00 97.84 0.00 0.00 0.00 0.00 0.00 00:07:50.262 [2024-11-26T17:58:07.475Z] =================================================================================================================== 00:07:50.262 [2024-11-26T17:58:07.475Z] Total : 25047.00 97.84 0.00 0.00 0.00 0.00 0.00 00:07:50.262 00:07:51.212 18:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 10842f6e-8cc7-47dd-8baf-ce151792153b 00:07:51.474 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.474 Nvme0n1 : 2.00 25195.00 98.42 0.00 0.00 0.00 0.00 0.00 00:07:51.474 [2024-11-26T17:58:08.687Z] =================================================================================================================== 00:07:51.474 [2024-11-26T17:58:08.687Z] Total : 25195.00 98.42 0.00 0.00 0.00 0.00 0.00 00:07:51.474 00:07:51.474 true 00:07:51.474 18:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 10842f6e-8cc7-47dd-8baf-ce151792153b 00:07:51.474 18:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:51.734 18:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:51.734 18:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:51.734 18:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2760870 00:07:52.304 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:52.304 Nvme0n1 : 3.00 25266.33 98.70 0.00 0.00 0.00 0.00 0.00 00:07:52.304 [2024-11-26T17:58:09.517Z] =================================================================================================================== 00:07:52.304 [2024-11-26T17:58:09.517Z] Total : 25266.33 98.70 0.00 0.00 0.00 0.00 0.00 00:07:52.304 00:07:53.685 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.685 Nvme0n1 : 4.00 25296.25 98.81 0.00 0.00 0.00 0.00 0.00 00:07:53.685 [2024-11-26T17:58:10.898Z] 
=================================================================================================================== 00:07:53.685 [2024-11-26T17:58:10.898Z] Total : 25296.25 98.81 0.00 0.00 0.00 0.00 0.00 00:07:53.685 00:07:54.625 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.625 Nvme0n1 : 5.00 25343.00 99.00 0.00 0.00 0.00 0.00 0.00 00:07:54.625 [2024-11-26T17:58:11.838Z] =================================================================================================================== 00:07:54.625 [2024-11-26T17:58:11.838Z] Total : 25343.00 99.00 0.00 0.00 0.00 0.00 0.00 00:07:54.625 00:07:55.563 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.563 Nvme0n1 : 6.00 25369.83 99.10 0.00 0.00 0.00 0.00 0.00 00:07:55.563 [2024-11-26T17:58:12.776Z] =================================================================================================================== 00:07:55.563 [2024-11-26T17:58:12.776Z] Total : 25369.83 99.10 0.00 0.00 0.00 0.00 0.00 00:07:55.563 00:07:56.502 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.502 Nvme0n1 : 7.00 25397.71 99.21 0.00 0.00 0.00 0.00 0.00 00:07:56.502 [2024-11-26T17:58:13.715Z] =================================================================================================================== 00:07:56.502 [2024-11-26T17:58:13.715Z] Total : 25397.71 99.21 0.00 0.00 0.00 0.00 0.00 00:07:56.502 00:07:57.441 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.441 Nvme0n1 : 8.00 25414.88 99.28 0.00 0.00 0.00 0.00 0.00 00:07:57.441 [2024-11-26T17:58:14.654Z] =================================================================================================================== 00:07:57.441 [2024-11-26T17:58:14.654Z] Total : 25414.88 99.28 0.00 0.00 0.00 0.00 0.00 00:07:57.441 00:07:58.382 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.382 Nvme0n1 : 9.00 25428.22 99.33 0.00 0.00 0.00 0.00 0.00 00:07:58.382 [2024-11-26T17:58:15.595Z] =================================================================================================================== 00:07:58.382 [2024-11-26T17:58:15.595Z] Total : 25428.22 99.33 0.00 0.00 0.00 0.00 0.00 00:07:58.382 00:07:59.321 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.321 Nvme0n1 : 10.00 25445.00 99.39 0.00 0.00 0.00 0.00 0.00 00:07:59.321 [2024-11-26T17:58:16.534Z] =================================================================================================================== 00:07:59.321 [2024-11-26T17:58:16.534Z] Total : 25445.00 99.39 0.00 0.00 0.00 0.00 0.00 00:07:59.321 00:07:59.321 00:07:59.321 Latency(us) 00:07:59.321 [2024-11-26T17:58:16.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:59.321 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.321 Nvme0n1 : 10.00 25442.55 99.38 0.00 0.00 5027.71 1597.44 8683.52 00:07:59.321 [2024-11-26T17:58:16.534Z] =================================================================================================================== 00:07:59.321 [2024-11-26T17:58:16.534Z] Total : 25442.55 99.38 0.00 0.00 5027.71 1597.44 8683.52 00:07:59.321 { 00:07:59.321 "results": [ 00:07:59.321 { 00:07:59.321 "job": "Nvme0n1", 00:07:59.321 "core_mask": "0x2", 00:07:59.321 "workload": "randwrite", 00:07:59.321 "status": "finished", 00:07:59.321 "queue_depth": 128, 00:07:59.321 "io_size": 4096, 00:07:59.321 
"runtime": 10.003439, 00:07:59.321 "iops": 25442.55030694944, 00:07:59.321 "mibps": 99.38496213652125, 00:07:59.321 "io_failed": 0, 00:07:59.321 "io_timeout": 0, 00:07:59.321 "avg_latency_us": 5027.707148318554, 00:07:59.321 "min_latency_us": 1597.44, 00:07:59.321 "max_latency_us": 8683.52 00:07:59.321 } 00:07:59.321 ], 00:07:59.321 "core_count": 1 00:07:59.321 } 00:07:59.321 18:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2760747 00:07:59.321 18:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2760747 ']' 00:07:59.321 18:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2760747 00:07:59.321 18:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:59.321 18:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:59.321 18:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2760747 00:07:59.582 18:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:59.582 18:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:59.582 18:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2760747' 00:07:59.582 killing process with pid 2760747 00:07:59.582 18:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2760747 00:07:59.582 Received shutdown signal, test time was about 10.000000 seconds 00:07:59.582 00:07:59.582 Latency(us) 00:07:59.582 [2024-11-26T17:58:16.795Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:59.582 [2024-11-26T17:58:16.795Z] =================================================================================================================== 00:07:59.582 [2024-11-26T17:58:16.795Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:59.582 18:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2760747 00:07:59.582 18:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:59.842 18:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:00.103 18:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 10842f6e-8cc7-47dd-8baf-ce151792153b 00:08:00.103 18:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:00.103 18:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:00.103 18:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:00.103 18:58:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2756851 00:08:00.103 18:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2756851 00:08:00.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2756851 Killed "${NVMF_APP[@]}" "$@" 00:08:00.103 18:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:00.103 18:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:00.103 18:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:00.103 18:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:00.103 18:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:00.363 18:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2763194 00:08:00.363 18:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2763194 00:08:00.363 18:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:00.363 18:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2763194 ']' 00:08:00.363 18:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.364 18:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.364 18:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.364 18:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.364 18:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:00.364 [2024-11-26 18:58:17.377298] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:08:00.364 [2024-11-26 18:58:17.377367] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.364 [2024-11-26 18:58:17.468078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.364 [2024-11-26 18:58:17.496580] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:00.364 [2024-11-26 18:58:17.496607] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:00.364 [2024-11-26 18:58:17.496612] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:00.364 [2024-11-26 18:58:17.496617] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:08:00.364 [2024-11-26 18:58:17.496622] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:00.364 [2024-11-26 18:58:17.497085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.305 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.305 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:01.305 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:01.305 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:01.305 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:01.305 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:01.305 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:01.305 [2024-11-26 18:58:18.367501] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:01.305 [2024-11-26 18:58:18.367579] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:01.305 [2024-11-26 18:58:18.367602] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:01.305 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:01.305 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 8ac79e05-fd02-48d3-88a7-9c4ba75c80bd 00:08:01.305 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=8ac79e05-fd02-48d3-88a7-9c4ba75c80bd 00:08:01.305 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:01.305 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:01.305 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:01.305 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:01.305 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:01.566 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8ac79e05-fd02-48d3-88a7-9c4ba75c80bd -t 2000 00:08:01.566 [ 00:08:01.566 { 00:08:01.566 "name": "8ac79e05-fd02-48d3-88a7-9c4ba75c80bd", 00:08:01.566 "aliases": [ 00:08:01.566 "lvs/lvol" 00:08:01.566 ], 00:08:01.566 "product_name": "Logical Volume", 00:08:01.566 "block_size": 4096, 00:08:01.566 "num_blocks": 38912, 00:08:01.566 "uuid": "8ac79e05-fd02-48d3-88a7-9c4ba75c80bd", 00:08:01.566 "assigned_rate_limits": { 00:08:01.566 "rw_ios_per_sec": 0, 00:08:01.566 "rw_mbytes_per_sec": 0, 
00:08:01.566 "r_mbytes_per_sec": 0, 00:08:01.566 "w_mbytes_per_sec": 0 00:08:01.566 }, 00:08:01.566 "claimed": false, 00:08:01.566 "zoned": false, 00:08:01.566 "supported_io_types": { 00:08:01.566 "read": true, 00:08:01.566 "write": true, 00:08:01.566 "unmap": true, 00:08:01.566 "flush": false, 00:08:01.566 "reset": true, 00:08:01.566 "nvme_admin": false, 00:08:01.566 "nvme_io": false, 00:08:01.566 "nvme_io_md": false, 00:08:01.566 "write_zeroes": true, 00:08:01.566 "zcopy": false, 00:08:01.566 "get_zone_info": false, 00:08:01.566 "zone_management": false, 00:08:01.566 "zone_append": false, 00:08:01.566 "compare": false, 00:08:01.566 "compare_and_write": false, 00:08:01.566 "abort": false, 00:08:01.566 "seek_hole": true, 00:08:01.566 "seek_data": true, 00:08:01.566 "copy": false, 00:08:01.566 "nvme_iov_md": false 00:08:01.566 }, 00:08:01.566 "driver_specific": { 00:08:01.566 "lvol": { 00:08:01.566 "lvol_store_uuid": "10842f6e-8cc7-47dd-8baf-ce151792153b", 00:08:01.566 "base_bdev": "aio_bdev", 00:08:01.566 "thin_provision": false, 00:08:01.566 "num_allocated_clusters": 38, 00:08:01.566 "snapshot": false, 00:08:01.566 "clone": false, 00:08:01.566 "esnap_clone": false 00:08:01.566 } 00:08:01.566 } 00:08:01.566 } 00:08:01.566 ] 00:08:01.566 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:01.566 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 10842f6e-8cc7-47dd-8baf-ce151792153b 00:08:01.566 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:01.826 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:01.826 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:01.826 18:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 10842f6e-8cc7-47dd-8baf-ce151792153b 00:08:02.087 18:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:02.087 18:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:02.087 [2024-11-26 18:58:19.220177] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:02.087 18:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 10842f6e-8cc7-47dd-8baf-ce151792153b 00:08:02.087 18:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:02.087 18:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 10842f6e-8cc7-47dd-8baf-ce151792153b 00:08:02.087 18:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:02.087 18:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:02.087 18:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:02.087 18:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:02.087 18:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:02.087 18:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:02.087 18:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:02.087 18:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:02.087 18:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 10842f6e-8cc7-47dd-8baf-ce151792153b 00:08:02.347 request: 00:08:02.347 { 00:08:02.347 "uuid": "10842f6e-8cc7-47dd-8baf-ce151792153b", 00:08:02.347 "method": "bdev_lvol_get_lvstores", 00:08:02.347 "req_id": 1 00:08:02.347 } 00:08:02.347 Got JSON-RPC error response 00:08:02.347 response: 00:08:02.347 { 00:08:02.347 "code": -19, 00:08:02.347 "message": "No such device" 00:08:02.347 } 00:08:02.347 18:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:02.347 18:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:02.347 18:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:02.347 18:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:02.347 18:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:02.608 aio_bdev 00:08:02.608 18:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8ac79e05-fd02-48d3-88a7-9c4ba75c80bd 00:08:02.608 18:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=8ac79e05-fd02-48d3-88a7-9c4ba75c80bd 00:08:02.608 18:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:02.608 18:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:02.608 18:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:02.608 18:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:02.608 18:58:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:02.608 18:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8ac79e05-fd02-48d3-88a7-9c4ba75c80bd -t 2000 00:08:02.867 [ 00:08:02.867 { 00:08:02.867 "name": "8ac79e05-fd02-48d3-88a7-9c4ba75c80bd", 00:08:02.867 "aliases": [ 00:08:02.867 "lvs/lvol" 00:08:02.867 ], 00:08:02.867 "product_name": "Logical Volume", 00:08:02.867 "block_size": 4096, 00:08:02.867 "num_blocks": 38912, 00:08:02.867 "uuid": "8ac79e05-fd02-48d3-88a7-9c4ba75c80bd", 00:08:02.867 "assigned_rate_limits": { 00:08:02.867 "rw_ios_per_sec": 0, 00:08:02.867 "rw_mbytes_per_sec": 0, 00:08:02.867 "r_mbytes_per_sec": 0, 00:08:02.867 "w_mbytes_per_sec": 0 00:08:02.867 }, 00:08:02.867 "claimed": false, 00:08:02.867 "zoned": false, 00:08:02.867 "supported_io_types": { 00:08:02.867 "read": true, 00:08:02.867 "write": true, 00:08:02.867 "unmap": true, 00:08:02.867 "flush": false, 00:08:02.867 "reset": true, 00:08:02.867 "nvme_admin": false, 00:08:02.867 "nvme_io": false, 00:08:02.867 "nvme_io_md": false, 00:08:02.867 "write_zeroes": true, 00:08:02.867 "zcopy": false, 00:08:02.867 "get_zone_info": false, 00:08:02.868 "zone_management": false, 00:08:02.868 "zone_append": false, 00:08:02.868 "compare": false, 00:08:02.868 "compare_and_write": false, 00:08:02.868 "abort": false, 00:08:02.868 "seek_hole": true, 00:08:02.868 "seek_data": true, 00:08:02.868 "copy": false, 00:08:02.868 "nvme_iov_md": false 00:08:02.868 }, 00:08:02.868 "driver_specific": { 00:08:02.868 "lvol": { 00:08:02.868 "lvol_store_uuid": "10842f6e-8cc7-47dd-8baf-ce151792153b", 00:08:02.868 "base_bdev": "aio_bdev", 00:08:02.868 "thin_provision": false, 00:08:02.868 "num_allocated_clusters": 38, 00:08:02.868 "snapshot": false, 00:08:02.868 "clone": false, 00:08:02.868 "esnap_clone": false 00:08:02.868 } 00:08:02.868 } 00:08:02.868 } 00:08:02.868 ] 00:08:02.868 18:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:02.868 18:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 10842f6e-8cc7-47dd-8baf-ce151792153b 00:08:02.868 18:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:03.127 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:03.127 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 10842f6e-8cc7-47dd-8baf-ce151792153b 00:08:03.127 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:03.386 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:03.386 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8ac79e05-fd02-48d3-88a7-9c4ba75c80bd 00:08:03.386 18:58:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 10842f6e-8cc7-47dd-8baf-ce151792153b 00:08:03.646 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:03.646 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:03.908 00:08:03.908 real 0m17.493s 00:08:03.908 user 0m45.838s 00:08:03.908 sys 0m3.191s 00:08:03.908 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.908 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:03.908 ************************************ 00:08:03.908 END TEST lvs_grow_dirty 00:08:03.908 ************************************ 00:08:03.908 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:03.908 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:03.908 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:03.908 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:03.908 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:03.908 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:03.908 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:03.908 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:03.908 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:03.908 nvmf_trace.0 00:08:03.908 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:03.908 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:03.908 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:03.908 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:03.908 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:03.908 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:03.908 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:03.908 18:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:03.908 rmmod nvme_tcp 00:08:03.908 rmmod nvme_fabrics 00:08:03.908 rmmod nvme_keyring 00:08:03.908 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:03.908 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:03.908 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:03.908 
18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2763194 ']' 00:08:03.908 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2763194 00:08:03.908 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2763194 ']' 00:08:03.908 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2763194 00:08:03.908 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:03.908 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:03.908 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2763194 00:08:03.908 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:03.908 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:03.908 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2763194' 00:08:03.908 killing process with pid 2763194 00:08:03.908 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2763194 00:08:03.908 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2763194 00:08:04.169 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:04.169 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:04.169 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:04.169 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:04.169 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:04.169 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:04.169 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:04.169 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:04.169 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:04.169 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.169 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:04.169 18:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.082 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:06.082 00:08:06.082 real 0m45.011s 00:08:06.082 user 1m7.837s 00:08:06.082 sys 0m11.004s 00:08:06.365 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.365 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:06.365 ************************************ 00:08:06.365 END TEST nvmf_lvs_grow 00:08:06.365 ************************************ 00:08:06.365 18:58:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:06.365 18:58:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:06.365 18:58:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.366 18:58:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:06.366 ************************************ 00:08:06.366 START TEST nvmf_bdev_io_wait 00:08:06.366 ************************************ 00:08:06.366 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:06.366 * Looking for test storage... 00:08:06.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:06.366 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:06.366 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:08:06.366 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:06.366 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:06.366 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:06.366 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:06.366 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:06.366 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:06.366 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:06.366 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:06.366 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:06.366 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:06.366 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:06.366 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:06.366 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:06.366 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:06.366 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:06.366 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:06.366 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:06.366 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:06.366 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:06.366 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:06.366 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:06.366 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:06.366 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:06.366 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:06.366 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:06.366 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:06.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.627 --rc genhtml_branch_coverage=1 00:08:06.627 --rc genhtml_function_coverage=1 00:08:06.627 --rc genhtml_legend=1 00:08:06.627 --rc geninfo_all_blocks=1 00:08:06.627 --rc geninfo_unexecuted_blocks=1 00:08:06.627 00:08:06.627 ' 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:06.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.627 --rc genhtml_branch_coverage=1 00:08:06.627 --rc genhtml_function_coverage=1 00:08:06.627 --rc genhtml_legend=1 00:08:06.627 --rc geninfo_all_blocks=1 00:08:06.627 --rc geninfo_unexecuted_blocks=1 00:08:06.627 00:08:06.627 ' 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:06.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.627 --rc genhtml_branch_coverage=1 00:08:06.627 --rc genhtml_function_coverage=1 00:08:06.627 --rc genhtml_legend=1 00:08:06.627 --rc geninfo_all_blocks=1 00:08:06.627 --rc geninfo_unexecuted_blocks=1 00:08:06.627 00:08:06.627 ' 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:06.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.627 --rc genhtml_branch_coverage=1 00:08:06.627 --rc genhtml_function_coverage=1 00:08:06.627 --rc genhtml_legend=1 00:08:06.627 --rc geninfo_all_blocks=1 00:08:06.627 --rc geninfo_unexecuted_blocks=1 00:08:06.627 00:08:06.627 ' 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:06.627 18:58:23 
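The scripts/common.sh trace above is the harness probing the installed lcov and comparing its version against 2 field by field (1.15 sorts before 2, so the 1.x coverage flags get exported). A condensed sketch of that comparison; ver_lt is an illustrative stand-in for the harness helpers, not their actual code:

# succeed when version $1 sorts before version $2, comparing numeric dot-fields
ver_lt() {
    local -a a b
    local IFS=.-: i
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        ((${a[i]:-0} < ${b[i]:-0})) && return 0   # missing fields count as 0
        ((${a[i]:-0} > ${b[i]:-0})) && return 1
    done
    return 1  # equal is not "less than"
}

ver_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov 1.x detected"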
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:06.627 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:06.628 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:06.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:06.628 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:06.628 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:06.628 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:06.628 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:06.628 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:06.628 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:06.628 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:06.628 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:06.628 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:06.628 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:06.628 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:06.628 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.628 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:06.628 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.628 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:06.628 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:06.628 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:06.628 18:58:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:14.768 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:14.768 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.768 18:58:30 
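(One aside: the "[: : integer expression expected" complaint recorded a little earlier, while nvmf/common.sh was being sourced, comes from line 33 handing an empty string to a numeric [ test; the check simply evaluates false and the run continues. Guarding the expansion with a default, e.g. [ "${flag:-0}" -eq 1 ] with flag as a placeholder name here, would silence it.)

The scan above fills the e810/x722/mlx PCI ID tables and then walks the matching devices to find their kernel net interfaces, which is how cvl_0_0 and cvl_0_1 are discovered on the two E810 ports. A condensed, illustrative sketch of that sysfs walk, not the harness code itself:

# list kernel net interfaces sitting on Intel E810 NICs (device ID 0x159b, as in this run)
for pci in /sys/bus/pci/devices/*; do
    [[ $(< "$pci/vendor") == 0x8086 ]] || continue
    [[ $(< "$pci/device") == 0x159b ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "Found net device ${net##*/} under ${pci##*/}"
    done
done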
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:14.768 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:14.769 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:14.769 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.769 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:14.769 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:14.769 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.769 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:14.769 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.769 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:14.769 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:14.769 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:14.769 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:14.769 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.769 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:14.769 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:14.769 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.769 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:14.769 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:14.769 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:14.769 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:14.769 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:14.769 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:14.769 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:14.769 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:14.769 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:14.769 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:14.769 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:14.769 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:14.769 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:14.769 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:14.769 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:14.769 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:14.769 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:14.769 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:14.769 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:14.769 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:14.769 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:14.769 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:14.769 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:14.769 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:14.769 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:14.769 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:14.769 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:14.769 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:14.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:14.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:08:14.769 00:08:14.769 --- 10.0.0.2 ping statistics --- 00:08:14.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.769 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:08:14.769 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:14.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:14.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:08:14.769 00:08:14.769 --- 10.0.0.1 ping statistics --- 00:08:14.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.769 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:08:14.769 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:14.769 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:14.769 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:14.769 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:14.769 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:14.769 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:14.769 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:14.769 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:14.769 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:14.769 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:14.769 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:14.769 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:14.769 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.769 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2768274 00:08:14.769 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2768274 00:08:14.769 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:14.769 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2768274 ']' 00:08:14.769 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.769 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:14.769 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.769 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:14.769 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.769 [2024-11-26 18:58:31.242031] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
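The nvmf_tcp_init block above gives the suite a real two-endpoint topology on one machine: the first E810 port (cvl_0_0, target side, 10.0.0.2) moves into its own network namespace while the second port (cvl_0_1, initiator side, 10.0.0.1) stays in the default namespace, so NVMe/TCP traffic crosses the physical link between them. A minimal sketch of the same plumbing, using the names and addresses from this run:

ip netns add cvl_0_0_ns_spdk                    # namespace that owns the target port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator address, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                              # the two pings above check both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1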
00:08:14.769 [2024-11-26 18:58:31.242094] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.769 [2024-11-26 18:58:31.344134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:14.769 [2024-11-26 18:58:31.398241] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:14.769 [2024-11-26 18:58:31.398297] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:14.769 [2024-11-26 18:58:31.398306] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:14.769 [2024-11-26 18:58:31.398314] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:14.769 [2024-11-26 18:58:31.398320] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:14.769 [2024-11-26 18:58:31.400721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.769 [2024-11-26 18:58:31.400883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.769 [2024-11-26 18:58:31.401045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.769 [2024-11-26 18:58:31.401045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:15.030 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:15.030 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:15.030 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:15.030 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:15.030 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:15.030 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:15.030 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:15.030 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.030 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:15.030 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.030 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:15.030 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.030 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:15.031 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.031 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:15.031 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.031 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:15.031 [2024-11-26 18:58:32.192723] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:15.031 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.031 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:15.031 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.031 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:15.031 Malloc0 00:08:15.031 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.031 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:15.031 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.031 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:15.031 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.031 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:15.031 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.031 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:15.292 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.292 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:15.292 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.292 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:15.292 [2024-11-26 18:58:32.258538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:15.292 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.292 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2768346 00:08:15.292 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:15.292 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:15.292 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2768349 00:08:15.292 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:15.292 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:15.292 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:15.292 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:15.292 { 00:08:15.292 "params": { 
00:08:15.292 "name": "Nvme$subsystem", 00:08:15.292 "trtype": "$TEST_TRANSPORT", 00:08:15.292 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:15.292 "adrfam": "ipv4", 00:08:15.292 "trsvcid": "$NVMF_PORT", 00:08:15.292 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:15.292 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:15.292 "hdgst": ${hdgst:-false}, 00:08:15.292 "ddgst": ${ddgst:-false} 00:08:15.292 }, 00:08:15.292 "method": "bdev_nvme_attach_controller" 00:08:15.292 } 00:08:15.292 EOF 00:08:15.292 )") 00:08:15.292 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2768352 00:08:15.292 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:15.292 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:15.292 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:15.292 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:15.292 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:15.292 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2768356 00:08:15.292 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:15.292 { 00:08:15.292 "params": { 00:08:15.292 "name": "Nvme$subsystem", 00:08:15.292 "trtype": "$TEST_TRANSPORT", 00:08:15.292 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:15.292 "adrfam": "ipv4", 00:08:15.292 "trsvcid": "$NVMF_PORT", 00:08:15.292 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:15.292 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:15.292 "hdgst": ${hdgst:-false}, 00:08:15.292 "ddgst": ${ddgst:-false} 00:08:15.292 }, 00:08:15.292 "method": "bdev_nvme_attach_controller" 00:08:15.292 } 00:08:15.292 EOF 00:08:15.292 )") 00:08:15.292 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:15.292 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:15.292 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:15.292 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:15.292 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:15.292 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:15.292 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:15.292 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:15.292 { 00:08:15.292 "params": { 00:08:15.292 "name": "Nvme$subsystem", 00:08:15.292 "trtype": "$TEST_TRANSPORT", 00:08:15.292 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:15.292 "adrfam": "ipv4", 00:08:15.292 "trsvcid": "$NVMF_PORT", 00:08:15.292 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:15.292 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:15.292 "hdgst": ${hdgst:-false}, 
00:08:15.292 "ddgst": ${ddgst:-false} 00:08:15.292 }, 00:08:15.292 "method": "bdev_nvme_attach_controller" 00:08:15.292 } 00:08:15.292 EOF 00:08:15.292 )") 00:08:15.292 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:15.292 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:15.292 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:15.292 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:15.292 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:15.292 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:15.292 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:15.292 { 00:08:15.292 "params": { 00:08:15.292 "name": "Nvme$subsystem", 00:08:15.292 "trtype": "$TEST_TRANSPORT", 00:08:15.292 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:15.292 "adrfam": "ipv4", 00:08:15.292 "trsvcid": "$NVMF_PORT", 00:08:15.292 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:15.292 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:15.292 "hdgst": ${hdgst:-false}, 00:08:15.292 "ddgst": ${ddgst:-false} 00:08:15.293 }, 00:08:15.293 "method": "bdev_nvme_attach_controller" 00:08:15.293 } 00:08:15.293 EOF 00:08:15.293 )") 00:08:15.293 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:15.293 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2768346 00:08:15.293 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:15.293 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:15.293 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:15.293 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:15.293 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:15.293 "params": { 00:08:15.293 "name": "Nvme1", 00:08:15.293 "trtype": "tcp", 00:08:15.293 "traddr": "10.0.0.2", 00:08:15.293 "adrfam": "ipv4", 00:08:15.293 "trsvcid": "4420", 00:08:15.293 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:15.293 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:15.293 "hdgst": false, 00:08:15.293 "ddgst": false 00:08:15.293 }, 00:08:15.293 "method": "bdev_nvme_attach_controller" 00:08:15.293 }' 00:08:15.293 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:15.293 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:15.293 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:15.293 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:15.293 "params": { 00:08:15.293 "name": "Nvme1", 00:08:15.293 "trtype": "tcp", 00:08:15.293 "traddr": "10.0.0.2", 00:08:15.293 "adrfam": "ipv4", 00:08:15.293 "trsvcid": "4420", 00:08:15.293 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:15.293 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:15.293 "hdgst": false, 00:08:15.293 "ddgst": false 00:08:15.293 }, 00:08:15.293 "method": "bdev_nvme_attach_controller" 00:08:15.293 }' 00:08:15.293 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:15.293 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:15.293 "params": { 00:08:15.293 "name": "Nvme1", 00:08:15.293 "trtype": "tcp", 00:08:15.293 "traddr": "10.0.0.2", 00:08:15.293 "adrfam": "ipv4", 00:08:15.293 "trsvcid": "4420", 00:08:15.293 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:15.293 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:15.293 "hdgst": false, 00:08:15.293 "ddgst": false 00:08:15.293 }, 00:08:15.293 "method": "bdev_nvme_attach_controller" 00:08:15.293 }' 00:08:15.293 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:15.293 18:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:15.293 "params": { 00:08:15.293 "name": "Nvme1", 00:08:15.293 "trtype": "tcp", 00:08:15.293 "traddr": "10.0.0.2", 00:08:15.293 "adrfam": "ipv4", 00:08:15.293 "trsvcid": "4420", 00:08:15.293 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:15.293 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:15.293 "hdgst": false, 00:08:15.293 "ddgst": false 00:08:15.293 }, 00:08:15.293 "method": "bdev_nvme_attach_controller" 00:08:15.293 }' 00:08:15.293 [2024-11-26 18:58:32.316380] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:08:15.293 [2024-11-26 18:58:32.316457] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:15.293 [2024-11-26 18:58:32.321286] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:08:15.293 [2024-11-26 18:58:32.321356] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:15.293 [2024-11-26 18:58:32.322804] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:08:15.293 [2024-11-26 18:58:32.322868] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:15.293 [2024-11-26 18:58:32.330600] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
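Each of the four bdevperf instances (write, read, flush, unmap, on distinct core masks and shm IDs) receives its controller-attach configuration on /dev/fd/63, i.e. via process substitution. gen_nvmf_target_json wraps the fragment printed above into a full bdev-subsystem JSON config; a sketch of one invocation under that assumption, run from the spdk checkout:

build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 --json <(cat <<'EOF'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    } ]
  } ]
}
EOF
)

The Nvme1n1 job named in the result tables below is the namespace bdev created by this attach (controller Nvme1, namespace 1).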
00:08:15.293 [2024-11-26 18:58:32.330677] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:15.554 [2024-11-26 18:58:32.531387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.554 [2024-11-26 18:58:32.571853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:15.554 [2024-11-26 18:58:32.628831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.554 [2024-11-26 18:58:32.669944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:15.554 [2024-11-26 18:58:32.695693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.554 [2024-11-26 18:58:32.734642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:15.814 [2024-11-26 18:58:32.767995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.814 [2024-11-26 18:58:32.806019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:15.814 Running I/O for 1 seconds... 00:08:15.814 Running I/O for 1 seconds... 00:08:16.075 Running I/O for 1 seconds... 00:08:16.075 Running I/O for 1 seconds... 00:08:17.016 7455.00 IOPS, 29.12 MiB/s 00:08:17.016 Latency(us) 00:08:17.016 [2024-11-26T17:58:34.229Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:17.016 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:17.016 Nvme1n1 : 1.02 7463.84 29.16 0.00 0.00 16990.32 7864.32 29054.29 00:08:17.016 [2024-11-26T17:58:34.229Z] =================================================================================================================== 00:08:17.016 [2024-11-26T17:58:34.230Z] Total : 7463.84 29.16 0.00 0.00 16990.32 7864.32 29054.29 00:08:17.017 11572.00 IOPS, 45.20 MiB/s 00:08:17.017 Latency(us) 00:08:17.017 [2024-11-26T17:58:34.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:17.017 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:17.017 Nvme1n1 : 1.01 11620.79 45.39 0.00 0.00 10969.81 6007.47 22173.01 00:08:17.017 [2024-11-26T17:58:34.230Z] =================================================================================================================== 00:08:17.017 [2024-11-26T17:58:34.230Z] Total : 11620.79 45.39 0.00 0.00 10969.81 6007.47 22173.01 00:08:17.017 7229.00 IOPS, 28.24 MiB/s [2024-11-26T17:58:34.230Z] 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2768349 00:08:17.017 00:08:17.017 Latency(us) 00:08:17.017 [2024-11-26T17:58:34.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:17.017 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:17.017 Nvme1n1 : 1.01 7323.30 28.61 0.00 0.00 17429.71 4068.69 40413.87 00:08:17.017 [2024-11-26T17:58:34.230Z] =================================================================================================================== 00:08:17.017 [2024-11-26T17:58:34.230Z] Total : 7323.30 28.61 0.00 0.00 17429.71 4068.69 40413.87 00:08:17.017 182880.00 IOPS, 714.38 MiB/s 00:08:17.017 Latency(us) 00:08:17.017 [2024-11-26T17:58:34.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:17.017 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:17.017 Nvme1n1 : 1.00 182519.13 712.97 0.00 0.00 697.29 303.79 
1966.08 00:08:17.017 [2024-11-26T17:58:34.230Z] =================================================================================================================== 00:08:17.017 [2024-11-26T17:58:34.230Z] Total : 182519.13 712.97 0.00 0.00 697.29 303.79 1966.08 00:08:17.017 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2768352 00:08:17.017 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2768356 00:08:17.017 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:17.017 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.017 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.017 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.017 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:17.017 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:17.017 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:17.017 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:17.017 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:17.017 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:17.017 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:17.017 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:17.017 rmmod nvme_tcp 00:08:17.017 rmmod nvme_fabrics 00:08:17.277 rmmod nvme_keyring 00:08:17.277 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:17.277 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:17.277 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:17.277 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2768274 ']' 00:08:17.277 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2768274 00:08:17.277 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2768274 ']' 00:08:17.277 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2768274 00:08:17.278 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:17.278 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:17.278 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2768274 00:08:17.278 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:17.278 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:17.278 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2768274' 00:08:17.278 killing 
process with pid 2768274 00:08:17.278 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2768274 00:08:17.278 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2768274 00:08:17.278 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:17.278 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:17.278 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:17.278 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:17.278 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:17.278 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:17.278 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:17.278 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:17.278 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:17.278 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.278 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:17.278 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:19.823 00:08:19.823 real 0m13.184s 00:08:19.823 user 0m19.845s 00:08:19.823 sys 0m7.562s 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:19.823 ************************************ 00:08:19.823 END TEST nvmf_bdev_io_wait 00:08:19.823 ************************************ 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:19.823 ************************************ 00:08:19.823 START TEST nvmf_queue_depth 00:08:19.823 ************************************ 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:19.823 * Looking for test storage... 
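For orientation: the nvmf_queue_depth test starting here stands up an NVMe-oF TCP target and drives it with bdevperf at queue depth 1024. A minimal sketch of the target-side setup, using only RPCs that appear verbatim later in this trace (rpc.py is SPDK's standard RPC client; in the run itself each call goes through the suite's rpc_cmd wrapper so it reaches the nvmf_tgt running inside the cvl_0_0_ns_spdk namespace):

    rpc.py nvmf_create_transport -t tcp -o -u 8192                   # -o and -u 8192 exactly as queue_depth.sh passes them
    rpc.py bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB RAM-backed bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0  # expose the bdev as a namespace
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420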
00:08:19.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:19.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.823 --rc genhtml_branch_coverage=1 00:08:19.823 --rc genhtml_function_coverage=1 00:08:19.823 --rc genhtml_legend=1 00:08:19.823 --rc geninfo_all_blocks=1 00:08:19.823 --rc geninfo_unexecuted_blocks=1 00:08:19.823 00:08:19.823 ' 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:19.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.823 --rc genhtml_branch_coverage=1 00:08:19.823 --rc genhtml_function_coverage=1 00:08:19.823 --rc genhtml_legend=1 00:08:19.823 --rc geninfo_all_blocks=1 00:08:19.823 --rc geninfo_unexecuted_blocks=1 00:08:19.823 00:08:19.823 ' 00:08:19.823 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:19.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.823 --rc genhtml_branch_coverage=1 00:08:19.823 --rc genhtml_function_coverage=1 00:08:19.823 --rc genhtml_legend=1 00:08:19.823 --rc geninfo_all_blocks=1 00:08:19.824 --rc geninfo_unexecuted_blocks=1 00:08:19.824 00:08:19.824 ' 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:19.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.824 --rc genhtml_branch_coverage=1 00:08:19.824 --rc genhtml_function_coverage=1 00:08:19.824 --rc genhtml_legend=1 00:08:19.824 --rc geninfo_all_blocks=1 00:08:19.824 --rc geninfo_unexecuted_blocks=1 00:08:19.824 00:08:19.824 ' 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:19.824 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:19.824 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.974 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:27.974 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:27.974 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:27.974 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:27.974 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:27.974 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:27.974 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:27.974 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:27.974 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:27.974 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:27.974 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:27.974 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:27.974 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:27.974 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:27.974 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:27.974 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:27.974 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:27.974 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:27.974 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:27.974 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:27.974 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:27.974 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:27.974 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:27.974 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:27.974 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:27.974 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:27.974 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:27.974 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:27.974 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:27.975 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:27.975 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:27.975 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:27.975 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:27.975 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:27.975 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:08:27.975 00:08:27.975 --- 10.0.0.2 ping statistics --- 00:08:27.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.975 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:27.975 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:27.975 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:08:27.975 00:08:27.975 --- 10.0.0.1 ping statistics --- 00:08:27.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.975 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2773012 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2773012 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2773012 ']' 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:27.975 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.975 [2024-11-26 18:58:44.498139] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
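The ping exchange above verifies the topology that nvmftestinit builds for NET_TYPE=phy: the first e810 port (cvl_0_0) is moved into a private network namespace and serves as the target side, while the second port (cvl_0_1) stays in the root namespace as the initiator; the two ports are presumably cabled back-to-back on this rig. Condensed from the trace above:

    ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ping -c 1 10.0.0.2                                                  # initiator -> target, as checked above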
00:08:27.975 [2024-11-26 18:58:44.498213] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.975 [2024-11-26 18:58:44.602482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.975 [2024-11-26 18:58:44.653897] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:27.975 [2024-11-26 18:58:44.653948] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:27.976 [2024-11-26 18:58:44.653957] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:27.976 [2024-11-26 18:58:44.653964] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:27.976 [2024-11-26 18:58:44.653970] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:27.976 [2024-11-26 18:58:44.654720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.236 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:28.236 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:28.236 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:28.236 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:28.236 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:28.236 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:28.236 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:28.237 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.237 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:28.237 [2024-11-26 18:58:45.363361] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:28.237 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.237 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:28.237 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.237 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:28.237 Malloc0 00:08:28.237 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.237 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:28.237 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.237 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:28.237 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.237 18:58:45 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:28.237 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.237 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:28.237 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.237 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:28.237 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.237 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:28.237 [2024-11-26 18:58:45.424484] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:28.237 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.237 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2773359 00:08:28.237 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:28.237 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:28.237 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2773359 /var/tmp/bdevperf.sock 00:08:28.237 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2773359 ']' 00:08:28.237 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:28.237 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.237 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:28.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:28.237 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.237 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:28.498 [2024-11-26 18:58:45.483332] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
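On the initiator side the pattern just traced is: start bdevperf in wait-for-RPC mode, attach the remote controller, then trigger the timed run. The -z flag makes bdevperf idle until the perform_tests RPC arrives, which is what lets the controller be attached before any I/O begins. A condensed sketch with the flags copied from this trace (paths relative to the spdk checkout):

    # 10 s verify workload, queue depth 1024, 4096-byte I/O, RPC-driven (-z)
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

    # Attach the target subsystem over NVMe/TCP; it shows up as bdev NVMe0n1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

    # Kick off the configured job; results are reported as the JSON seen below
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests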
00:08:28.498 [2024-11-26 18:58:45.483400] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2773359 ] 00:08:28.498 [2024-11-26 18:58:45.574956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.498 [2024-11-26 18:58:45.628226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.439 18:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:29.439 18:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:29.439 18:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:29.439 18:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.439 18:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:29.439 NVMe0n1 00:08:29.439 18:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.439 18:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:29.439 Running I/O for 10 seconds... 00:08:31.761 11264.00 IOPS, 44.00 MiB/s [2024-11-26T17:58:49.913Z] 11542.00 IOPS, 45.09 MiB/s [2024-11-26T17:58:50.855Z] 11605.33 IOPS, 45.33 MiB/s [2024-11-26T17:58:51.795Z] 11702.50 IOPS, 45.71 MiB/s [2024-11-26T17:58:52.738Z] 11942.40 IOPS, 46.65 MiB/s [2024-11-26T17:58:53.803Z] 12142.00 IOPS, 47.43 MiB/s [2024-11-26T17:58:54.766Z] 12320.29 IOPS, 48.13 MiB/s [2024-11-26T17:58:55.710Z] 12486.50 IOPS, 48.78 MiB/s [2024-11-26T17:58:57.120Z] 12626.78 IOPS, 49.32 MiB/s [2024-11-26T17:58:57.120Z] 12763.30 IOPS, 49.86 MiB/s 00:08:39.907 Latency(us) 00:08:39.907 [2024-11-26T17:58:57.120Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.907 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:39.907 Verification LBA range: start 0x0 length 0x4000 00:08:39.907 NVMe0n1 : 10.05 12776.60 49.91 0.00 0.00 79829.11 14964.05 60293.12 00:08:39.907 [2024-11-26T17:58:57.120Z] =================================================================================================================== 00:08:39.907 [2024-11-26T17:58:57.120Z] Total : 12776.60 49.91 0.00 0.00 79829.11 14964.05 60293.12 00:08:39.907 { 00:08:39.907 "results": [ 00:08:39.907 { 00:08:39.907 "job": "NVMe0n1", 00:08:39.907 "core_mask": "0x1", 00:08:39.907 "workload": "verify", 00:08:39.907 "status": "finished", 00:08:39.907 "verify_range": { 00:08:39.907 "start": 0, 00:08:39.907 "length": 16384 00:08:39.907 }, 00:08:39.907 "queue_depth": 1024, 00:08:39.907 "io_size": 4096, 00:08:39.907 "runtime": 10.050717, 00:08:39.907 "iops": 12776.600913148783, 00:08:39.907 "mibps": 49.90859731698743, 00:08:39.907 "io_failed": 0, 00:08:39.907 "io_timeout": 0, 00:08:39.907 "avg_latency_us": 79829.10811510688, 00:08:39.907 "min_latency_us": 14964.053333333333, 00:08:39.907 "max_latency_us": 60293.12 00:08:39.907 } 00:08:39.907 ], 00:08:39.907 "core_count": 1 00:08:39.907 } 00:08:39.907 18:58:56 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2773359 00:08:39.907 18:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2773359 ']' 00:08:39.907 18:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2773359 00:08:39.907 18:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:39.907 18:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:39.907 18:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2773359 00:08:39.907 18:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:39.907 18:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:39.907 18:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2773359' 00:08:39.907 killing process with pid 2773359 00:08:39.907 18:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2773359 00:08:39.907 Received shutdown signal, test time was about 10.000000 seconds 00:08:39.907 00:08:39.907 Latency(us) 00:08:39.907 [2024-11-26T17:58:57.120Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.907 [2024-11-26T17:58:57.120Z] =================================================================================================================== 00:08:39.907 [2024-11-26T17:58:57.120Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:39.907 18:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2773359 00:08:39.907 18:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:39.907 18:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:39.907 18:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:39.907 18:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:39.907 18:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:39.907 18:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:39.907 18:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:39.907 18:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:39.907 rmmod nvme_tcp 00:08:39.907 rmmod nvme_fabrics 00:08:39.907 rmmod nvme_keyring 00:08:39.907 18:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:39.907 18:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:39.907 18:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:39.907 18:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2773012 ']' 00:08:39.907 18:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2773012 00:08:39.907 18:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2773012 ']' 00:08:39.907 18:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 2773012 00:08:39.907 18:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:39.907 18:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:39.907 18:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2773012 00:08:39.907 18:58:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:39.907 18:58:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:39.907 18:58:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2773012' 00:08:39.907 killing process with pid 2773012 00:08:39.907 18:58:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2773012 00:08:39.907 18:58:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2773012 00:08:40.169 18:58:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:40.169 18:58:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:40.169 18:58:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:40.169 18:58:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:40.169 18:58:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:40.169 18:58:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:40.169 18:58:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:40.169 18:58:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:40.169 18:58:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:40.169 18:58:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.169 18:58:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:40.169 18:58:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.079 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:42.079 00:08:42.079 real 0m22.619s 00:08:42.079 user 0m25.963s 00:08:42.079 sys 0m7.093s 00:08:42.079 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.079 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:42.079 ************************************ 00:08:42.079 END TEST nvmf_queue_depth 00:08:42.079 ************************************ 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:08:42.340 ************************************ 00:08:42.340 START TEST nvmf_target_multipath 00:08:42.340 ************************************ 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:42.340 * Looking for test storage... 00:08:42.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:42.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.340 --rc genhtml_branch_coverage=1 00:08:42.340 --rc genhtml_function_coverage=1 00:08:42.340 --rc genhtml_legend=1 00:08:42.340 --rc geninfo_all_blocks=1 00:08:42.340 --rc geninfo_unexecuted_blocks=1 00:08:42.340 00:08:42.340 ' 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:42.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.340 --rc genhtml_branch_coverage=1 00:08:42.340 --rc genhtml_function_coverage=1 00:08:42.340 --rc genhtml_legend=1 00:08:42.340 --rc geninfo_all_blocks=1 00:08:42.340 --rc geninfo_unexecuted_blocks=1 00:08:42.340 00:08:42.340 ' 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:42.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.340 --rc genhtml_branch_coverage=1 00:08:42.340 --rc genhtml_function_coverage=1 00:08:42.340 --rc genhtml_legend=1 00:08:42.340 --rc geninfo_all_blocks=1 00:08:42.340 --rc geninfo_unexecuted_blocks=1 00:08:42.340 00:08:42.340 ' 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:42.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.340 --rc genhtml_branch_coverage=1 00:08:42.340 --rc genhtml_function_coverage=1 00:08:42.340 --rc genhtml_legend=1 00:08:42.340 --rc geninfo_all_blocks=1 00:08:42.340 --rc geninfo_unexecuted_blocks=1 00:08:42.340 00:08:42.340 ' 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.340 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:42.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:42.602 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:50.749 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:50.749 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:50.749 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:50.750 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:50.750 18:59:06 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:50.750 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:50.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:50.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.495 ms 00:08:50.750 00:08:50.750 --- 10.0.0.2 ping statistics --- 00:08:50.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.750 rtt min/avg/max/mdev = 0.495/0.495/0.495/0.000 ms 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:50.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:50.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:08:50.750 00:08:50.750 --- 10.0.0.1 ping statistics --- 00:08:50.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.750 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:50.750 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:50.750 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:50.750 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:50.750 only one NIC for nvmf test 00:08:50.750 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:50.750 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:50.750 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:50.750 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:50.750 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
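
The `lt 1.15 2` walk traced at the top of this test (scripts/common.sh@373, `cmp_versions`) is how autotest decides whether the installed lcov is older than 2.x and therefore which coverage flags to export. Condensed into a standalone sketch, assuming purely numeric dotted components (the real helper also validates each field through its `decimal` function, visible in the trace):

    # Sketch of the dotted-version compare traced above (scripts/common.sh lt/cmp_versions).
    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.-: op=$2
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            # Missing components compare as 0, mirroring the ${arr[v]:-0} idiom
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && [[ $op == '>' ]] && return 0
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && [[ $op == '<' ]] && return 0
            ((${ver1[v]:-0} != ${ver2[v]:-0})) && return 1
        done
        return 1   # equal versions: strict < and > both fail
    }
    lt 1.15 2 && echo 'lcov is older than 2.x'   # returns 0, as in the trace
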
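One recorded wart worth noting: this trace (and the zcopy trace later) logs "nvmf/common.sh: line 33: [: : integer expression expected", because `'[' '' -eq 1 ']'` hands test(1) an empty string where it expects an integer. It is harmless here, since the test simply evaluates false, but the conventional hardening is to default the flag before testing it; a sketch with a hypothetical variable name (the actual variable guarded at line 33 is not shown in this log):

    # "${VAR:-0}" guarantees test(1) sees an integer even when VAR is unset or empty
    if [ "${SPDK_TEST_SOME_FLAG:-0}" -eq 1 ]; then
        :   # guarded action elided; placeholder only
    fi
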
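The nvmf_tcp_init block just traced is what turns one host into a two-endpoint topology: the two E810 ports were found by globbing /sys/bus/pci/devices/<bdf>/net, then one (cvl_0_0) is moved into a private network namespace as the target side while its sibling (cvl_0_1) stays in the root namespace as the initiator, and a comment-tagged iptables rule opens TCP/4420 between them. Replayed by hand (device names and addresses copied from the trace; run as root), the bring-up is:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # The SPDK_NVMF comment lets teardown drop exactly this rule later via
    # iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

The nvmftestfini teardown traced next runs the module unloads under set +e precisely so that a module which was never loaded cannot abort the cleanup path.
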
00:08:50.750 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:50.750 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:50.750 rmmod nvme_tcp 00:08:50.750 rmmod nvme_fabrics 00:08:50.750 rmmod nvme_keyring 00:08:50.750 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:50.750 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:50.750 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:50.750 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:50.750 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:50.750 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:50.750 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:50.750 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:50.750 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:50.750 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:50.750 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:50.750 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:50.750 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:50.750 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.750 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.750 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.140 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:52.140 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:52.140 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:52.140 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:52.140 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:52.140 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:52.140 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:52.140 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:52.140 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:52.140 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:52.140 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:52.140 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:52.140 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:52.140 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:52.140 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:52.140 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:52.140 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:52.140 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:52.140 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:52.140 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:52.140 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:52.140 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:52.140 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.140 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:52.140 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.140 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:52.140 00:08:52.140 real 0m9.893s 00:08:52.140 user 0m2.172s 00:08:52.140 sys 0m5.688s 00:08:52.140 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.140 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:52.140 ************************************ 00:08:52.140 END TEST nvmf_target_multipath 00:08:52.140 ************************************ 00:08:52.140 18:59:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:52.140 18:59:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:52.140 18:59:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.140 18:59:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:52.140 ************************************ 00:08:52.140 START TEST nvmf_zcopy 00:08:52.140 ************************************ 00:08:52.140 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:52.401 * Looking for test storage... 
00:08:52.402 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:52.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.402 --rc genhtml_branch_coverage=1 00:08:52.402 --rc genhtml_function_coverage=1 00:08:52.402 --rc genhtml_legend=1 00:08:52.402 --rc geninfo_all_blocks=1 00:08:52.402 --rc geninfo_unexecuted_blocks=1 00:08:52.402 00:08:52.402 ' 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:52.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.402 --rc genhtml_branch_coverage=1 00:08:52.402 --rc genhtml_function_coverage=1 00:08:52.402 --rc genhtml_legend=1 00:08:52.402 --rc geninfo_all_blocks=1 00:08:52.402 --rc geninfo_unexecuted_blocks=1 00:08:52.402 00:08:52.402 ' 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:52.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.402 --rc genhtml_branch_coverage=1 00:08:52.402 --rc genhtml_function_coverage=1 00:08:52.402 --rc genhtml_legend=1 00:08:52.402 --rc geninfo_all_blocks=1 00:08:52.402 --rc geninfo_unexecuted_blocks=1 00:08:52.402 00:08:52.402 ' 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:52.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.402 --rc genhtml_branch_coverage=1 00:08:52.402 --rc genhtml_function_coverage=1 00:08:52.402 --rc genhtml_legend=1 00:08:52.402 --rc geninfo_all_blocks=1 00:08:52.402 --rc geninfo_unexecuted_blocks=1 00:08:52.402 00:08:52.402 ' 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.402 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.403 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.403 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.403 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:52.403 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.403 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:52.403 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:52.403 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:52.403 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.403 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.403 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.403 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:52.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:52.403 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:52.403 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:52.403 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:52.403 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:52.403 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:52.403 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:52.403 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:52.403 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:52.403 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:52.403 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.403 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:52.403 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.403 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:52.403 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:52.403 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:52.403 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:00.545 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:00.545 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:00.545 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:00.545 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:00.545 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:00.545 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:00.545 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:00.545 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:00.545 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:00.545 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:00.545 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:00.545 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:00.545 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:00.545 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:00.545 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:00.545 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:00.545 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:00.545 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:00.545 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:00.545 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:00.545 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:00.545 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:00.545 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:00.545 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:00.545 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:00.545 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:00.545 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:00.545 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:00.545 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:00.545 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:00.545 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:00.545 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:00.545 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:00.545 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:00.546 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:00.546 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:00.546 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:00.546 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:00.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:00.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:09:00.546 00:09:00.546 --- 10.0.0.2 ping statistics --- 00:09:00.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.546 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:00.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:00.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:09:00.546 00:09:00.546 --- 10.0.0.1 ping statistics --- 00:09:00.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.546 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:00.546 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:00.546 18:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:00.546 18:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:00.546 18:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:00.546 18:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:00.546 18:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:00.546 18:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:00.546 18:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:00.546 18:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:00.546 18:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:00.546 18:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2784065 00:09:00.547 18:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2784065 00:09:00.547 18:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:00.547 18:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2784065 ']' 00:09:00.547 18:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.547 18:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:00.547 18:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.547 18:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:00.547 18:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:00.547 [2024-11-26 18:59:17.112925] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
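
nvmfappstart, traced above, launches the target inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2, pid 2784065 here) and then parks in waitforlisten until the UNIX-domain RPC socket answers. A rough equivalent of that wait, sketched with rpc.py's rpc_get_methods as the probe; the real helper in autotest_common.sh is more elaborate, so treat the loop shape, not the details, as authoritative:

    nvmfpid=2784065                 # pid echoed by nvmfappstart above
    rpc_addr=/var/tmp/spdk.sock
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for ((i = 0; i < 100; i++)); do # mirrors the local max_retries=100 in the trace
        # rpc_get_methods succeeds once the app is up and serving the socket
        if "$rpc_py" -s "$rpc_addr" -t 1 rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        # bail out early if the target died during startup
        kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited' >&2; exit 1; }
        sleep 0.5
    done
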
00:09:00.547 [2024-11-26 18:59:17.112990] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.547 [2024-11-26 18:59:17.213170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.547 [2024-11-26 18:59:17.263548] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:00.547 [2024-11-26 18:59:17.263598] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:00.547 [2024-11-26 18:59:17.263606] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:00.547 [2024-11-26 18:59:17.263614] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:00.547 [2024-11-26 18:59:17.263620] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:00.547 [2024-11-26 18:59:17.264417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.808 18:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:00.808 18:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:00.808 18:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:00.808 18:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:00.808 18:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:00.808 18:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:00.808 18:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:00.808 18:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:00.808 18:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.808 18:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:00.808 [2024-11-26 18:59:17.975932] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:00.808 18:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.808 18:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:00.808 18:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.808 18:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:00.808 18:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.808 18:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:00.808 18:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.808 18:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:00.808 [2024-11-26 18:59:18.000212] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:00.808 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.808 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:00.808 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.808 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:00.808 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.808 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:00.808 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.068 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:01.068 malloc0 00:09:01.068 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.068 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:01.068 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.068 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:01.068 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.068 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:01.068 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:01.068 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:01.068 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:01.068 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:01.068 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:01.068 { 00:09:01.068 "params": { 00:09:01.068 "name": "Nvme$subsystem", 00:09:01.068 "trtype": "$TEST_TRANSPORT", 00:09:01.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:01.068 "adrfam": "ipv4", 00:09:01.068 "trsvcid": "$NVMF_PORT", 00:09:01.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:01.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:01.068 "hdgst": ${hdgst:-false}, 00:09:01.068 "ddgst": ${ddgst:-false} 00:09:01.068 }, 00:09:01.068 "method": "bdev_nvme_attach_controller" 00:09:01.068 } 00:09:01.068 EOF 00:09:01.068 )") 00:09:01.068 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:01.068 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:01.068 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:01.068 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:01.068 "params": { 00:09:01.068 "name": "Nvme1", 00:09:01.068 "trtype": "tcp", 00:09:01.068 "traddr": "10.0.0.2", 00:09:01.068 "adrfam": "ipv4", 00:09:01.068 "trsvcid": "4420", 00:09:01.068 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:01.068 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:01.068 "hdgst": false, 00:09:01.068 "ddgst": false 00:09:01.068 }, 00:09:01.068 "method": "bdev_nvme_attach_controller" 00:09:01.068 }' 00:09:01.068 [2024-11-26 18:59:18.109873] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:09:01.068 [2024-11-26 18:59:18.109939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2784246 ] 00:09:01.068 [2024-11-26 18:59:18.203250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.068 [2024-11-26 18:59:18.256277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.329 Running I/O for 10 seconds... 00:09:03.658 6500.00 IOPS, 50.78 MiB/s [2024-11-26T17:59:21.443Z] 6537.50 IOPS, 51.07 MiB/s [2024-11-26T17:59:22.828Z] 7258.67 IOPS, 56.71 MiB/s [2024-11-26T17:59:23.771Z] 7883.25 IOPS, 61.59 MiB/s [2024-11-26T17:59:24.713Z] 8260.60 IOPS, 64.54 MiB/s [2024-11-26T17:59:25.656Z] 8515.33 IOPS, 66.53 MiB/s [2024-11-26T17:59:26.597Z] 8699.86 IOPS, 67.97 MiB/s [2024-11-26T17:59:27.539Z] 8832.88 IOPS, 69.01 MiB/s [2024-11-26T17:59:28.481Z] 8938.67 IOPS, 69.83 MiB/s [2024-11-26T17:59:28.481Z] 9026.00 IOPS, 70.52 MiB/s 00:09:11.268 Latency(us) 00:09:11.268 [2024-11-26T17:59:28.481Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:11.268 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:11.268 Verification LBA range: start 0x0 length 0x1000 00:09:11.268 Nvme1n1 : 10.01 9029.45 70.54 0.00 0.00 14130.81 2443.95 28617.39 00:09:11.268 [2024-11-26T17:59:28.481Z] =================================================================================================================== 00:09:11.268 [2024-11-26T17:59:28.481Z] Total : 9029.45 70.54 0.00 0.00 14130.81 2443.95 28617.39 00:09:11.528 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2786358 00:09:11.528 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:11.528 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:11.528 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:11.528 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:11.528 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:11.528 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:11.528 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:11.528 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:11.528 { 00:09:11.528 "params": { 00:09:11.528 "name": 
"Nvme$subsystem", 00:09:11.528 "trtype": "$TEST_TRANSPORT", 00:09:11.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:11.528 "adrfam": "ipv4", 00:09:11.528 "trsvcid": "$NVMF_PORT", 00:09:11.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:11.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:11.528 "hdgst": ${hdgst:-false}, 00:09:11.528 "ddgst": ${ddgst:-false} 00:09:11.528 }, 00:09:11.528 "method": "bdev_nvme_attach_controller" 00:09:11.528 } 00:09:11.528 EOF 00:09:11.528 )") 00:09:11.528 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:11.528 [2024-11-26 18:59:28.572432] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.528 [2024-11-26 18:59:28.572460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.528 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:11.528 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:11.528 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:11.528 "params": { 00:09:11.529 "name": "Nvme1", 00:09:11.529 "trtype": "tcp", 00:09:11.529 "traddr": "10.0.0.2", 00:09:11.529 "adrfam": "ipv4", 00:09:11.529 "trsvcid": "4420", 00:09:11.529 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:11.529 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:11.529 "hdgst": false, 00:09:11.529 "ddgst": false 00:09:11.529 }, 00:09:11.529 "method": "bdev_nvme_attach_controller" 00:09:11.529 }' 00:09:11.529 [2024-11-26 18:59:28.584433] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.529 [2024-11-26 18:59:28.584442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.529 [2024-11-26 18:59:28.596464] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.529 [2024-11-26 18:59:28.596472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.529 [2024-11-26 18:59:28.608495] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.529 [2024-11-26 18:59:28.608503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.529 [2024-11-26 18:59:28.612317] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
00:09:11.529 [2024-11-26 18:59:28.612366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2786358 ]
00:09:11.529 [2024-11-26 18:59:28.620526] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:11.529 [2024-11-26 18:59:28.620534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... this subsystem.c:2126 / nvmf_rpc.c:1520 pair recurs every ~12-13 ms through 18:59:31.595174; the repeats are elided here and only the interleaved unique records are kept ...]
00:09:11.529 [2024-11-26 18:59:28.694361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:11.529 [2024-11-26 18:59:28.723692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:11.791 Running I/O for 5 seconds...
00:09:12.859 19084.00 IOPS, 149.09 MiB/s [2024-11-26T17:59:30.072Z]
00:09:13.905 19172.00 IOPS, 149.78 MiB/s [2024-11-26T17:59:31.118Z]
00:09:14.427 [2024-11-26 18:59:31.608868]
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.427 [2024-11-26 18:59:31.608883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.427 [2024-11-26 18:59:31.621703] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.427 [2024-11-26 18:59:31.621718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.427 [2024-11-26 18:59:31.634351] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.427 [2024-11-26 18:59:31.634366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.688 [2024-11-26 18:59:31.648088] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.688 [2024-11-26 18:59:31.648103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.688 [2024-11-26 18:59:31.661557] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.688 [2024-11-26 18:59:31.661572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.688 [2024-11-26 18:59:31.674465] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.688 [2024-11-26 18:59:31.674480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.688 [2024-11-26 18:59:31.688008] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.688 [2024-11-26 18:59:31.688023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.688 [2024-11-26 18:59:31.700784] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.688 [2024-11-26 18:59:31.700799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.688 [2024-11-26 18:59:31.713712] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.688 [2024-11-26 18:59:31.713726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.688 [2024-11-26 18:59:31.727224] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.688 [2024-11-26 18:59:31.727239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.688 [2024-11-26 18:59:31.740541] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.688 [2024-11-26 18:59:31.740556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.688 [2024-11-26 18:59:31.753953] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.688 [2024-11-26 18:59:31.753968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.688 [2024-11-26 18:59:31.766789] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.688 [2024-11-26 18:59:31.766804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.688 [2024-11-26 18:59:31.779521] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.688 [2024-11-26 18:59:31.779536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.688 [2024-11-26 18:59:31.792731] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.688 [2024-11-26 18:59:31.792746] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.688 [2024-11-26 18:59:31.806020] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.688 [2024-11-26 18:59:31.806036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.688 [2024-11-26 18:59:31.819253] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.688 [2024-11-26 18:59:31.819269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.688 [2024-11-26 18:59:31.832269] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.688 [2024-11-26 18:59:31.832284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.688 [2024-11-26 18:59:31.845211] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.688 [2024-11-26 18:59:31.845226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.688 [2024-11-26 18:59:31.858698] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.688 [2024-11-26 18:59:31.858713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.688 [2024-11-26 18:59:31.872012] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.688 [2024-11-26 18:59:31.872027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.688 [2024-11-26 18:59:31.885504] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.688 [2024-11-26 18:59:31.885519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.949 [2024-11-26 18:59:31.899069] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.949 [2024-11-26 18:59:31.899084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.949 [2024-11-26 18:59:31.911762] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.949 [2024-11-26 18:59:31.911777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.949 [2024-11-26 18:59:31.925145] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.949 [2024-11-26 18:59:31.925164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.949 [2024-11-26 18:59:31.938718] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.949 [2024-11-26 18:59:31.938733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.949 [2024-11-26 18:59:31.951120] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.949 [2024-11-26 18:59:31.951135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.949 [2024-11-26 18:59:31.964549] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.949 [2024-11-26 18:59:31.964564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.949 [2024-11-26 18:59:31.977522] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.949 [2024-11-26 18:59:31.977537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.949 [2024-11-26 18:59:31.990618] 
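The two errors above repeat in lockstep for every iteration of this phase: each RPC asks for NSID 1, which the subsystem already holds, so spdk_nvmf_subsystem_add_ns_ext rejects the add and nvmf_rpc reports the failure. A minimal reproduction sketch, not part of the captured log (the bdev names are hypothetical; paths assume an SPDK checkout root with a running target):

    # First add of NSID 1 succeeds; a second add of the same NSID fails with
    # "Requested NSID 1 already in use".
    ./scripts/rpc.py bdev_malloc_create -b malloc0 32 512
    ./scripts/rpc.py bdev_malloc_create -b malloc1 32 512
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1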
00:09:14.949 19216.67 IOPS, 150.13 MiB/s [2024-11-26T17:59:32.162Z]
00:09:15.992 19236.25 IOPS, 150.28 MiB/s [2024-11-26T17:59:33.205Z]
00:09:17.038 19251.40 IOPS, 150.40 MiB/s
00:09:17.038 Latency(us)
00:09:17.038 [2024-11-26T17:59:34.251Z] Device Information : runtime(s)     IOPS    MiB/s  Fail/s  TO/s  Average      min      max
00:09:17.038 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:17.038 Nvme1n1            :       5.01 19253.50   150.42    0.00  0.00  6642.78  2580.48 18896.21
00:09:17.038 [2024-11-26T17:59:34.251Z] ===================================================================================================================
00:09:17.038 [2024-11-26T17:59:34.251Z] Total              :            19253.50   150.42    0.00  0.00  6642.78  2580.48 18896.21
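The summary table can be sanity-checked with a rough closed-queue estimate, assuming Little's law (IOPS ≈ queue depth / average latency): a depth of 128 at the reported 6642.78 us average latency predicts almost exactly the throughput bdevperf printed.

    # 128 outstanding I/Os / 6642.78 us average latency:
    echo "scale=2; 128 * 1000000 / 6642.78" | bc   # ~19268.99 IOPS vs the reported 19253.50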
00:09:17.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2786358) - No such process
00:09:17.038 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2786358
00:09:17.038 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:17.038 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:17.038 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:17.038 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:17.038 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:09:17.038 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:17.038 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:17.038 delay0
00:09:17.038 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:17.038 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:09:17.038 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:17.038 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:17.038 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
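Here the test swaps the namespace onto a delay bdev: bdev_delay_create wraps malloc0 with one second (1000000 us) of injected average and p99 latency for both reads and writes, so the abort example started next always finds commands still in flight to cancel. A standalone sketch of the same two RPCs, assuming rpc_cmd is the usual thin wrapper over scripts/rpc.py:

    # -b base bdev, -d new delay bdev name, -r/-t average and p99 read latency,
    # -w/-n average and p99 write latency, all in microseconds
    ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1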
00:09:17.039 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:09:17.299 [2024-11-26 18:59:34.273383] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:09:23.881 Initializing NVMe Controllers
00:09:23.881 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:23.881 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:23.881 Initialization complete. Launching workers.
00:09:23.881 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 302, failed: 12692
00:09:23.881 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 12935, failed to submit 59
00:09:23.881 success 12815, unsuccessful 120, failed 0
00:09:23.881 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:09:23.881 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:09:23.881 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:23.881 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:09:23.881 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:23.881 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:09:23.881 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:23.881 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:23.881 rmmod nvme_tcp
00:09:23.881 rmmod nvme_fabrics
00:09:23.881 rmmod nvme_keyring
00:09:23.881 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:23.881 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:09:23.881 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:09:23.881 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2784065 ']'
00:09:23.881 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2784065
00:09:23.881 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2784065 ']'
00:09:23.881 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2784065
00:09:23.881 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:09:23.881 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:23.881 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2784065
00:09:24.141 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:09:24.141 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:09:24.141 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2784065'
00:09:24.141 killing process with pid 2784065
00:09:24.141 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2784065
00:09:24.141 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2784065
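The counters in the abort summary above add up consistently: completed plus failed I/Os (302 + 12692) matches aborts submitted plus aborts that failed to submit (12935 + 59), and success plus unsuccessful aborts (12815 + 120) matches the submitted count. A quick check, purely shell arithmetic on the printed numbers:

    echo $((302 + 12692))    # 12994 I/Os completed + failed
    echo $((12935 + 59))     # 12994 aborts submitted + failed to submit
    echo $((12815 + 120))    # 12935 submitted aborts = success + unsuccessful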
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:24.141 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:24.141 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:24.141 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:24.141 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:24.141 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:24.141 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:24.141 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:24.141 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:24.141 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.141 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:24.141 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.685 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:26.685 00:09:26.685 real 0m34.001s 00:09:26.685 user 0m45.080s 00:09:26.685 sys 0m11.355s 00:09:26.685 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.685 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:26.685 ************************************ 00:09:26.685 END TEST nvmf_zcopy 00:09:26.685 ************************************ 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:26.686 ************************************ 00:09:26.686 START TEST nvmf_nmic 00:09:26.686 ************************************ 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:26.686 * Looking for test storage... 
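
For reference, the nvmftestfini teardown traced just above reduces to a short shell sequence. This is a condensed sketch reconstructed from the xtrace lines, not the verbatim common.sh; remove_spdk_ns runs with tracing suppressed here, so the namespace-deletion step is an assumption:

    modprobe -v -r nvme-tcp                        # unload host-side modules
    modprobe -v -r nvme-fabrics                    # (prints the rmmod lines above)
    kill "$nvmfpid" && wait "$nvmfpid"             # stop the nvmf_tgt app
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the test's firewall rules
    ip netns delete cvl_0_0_ns_spdk                # assumed effect of remove_spdk_ns
    ip -4 addr flush cvl_0_1                       # clear the initiator-side port

The interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addressing are specific to this rig's e810 setup and reappear when the next test re-runs nvmf_tcp_init.
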
00:09:26.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:26.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.686 --rc genhtml_branch_coverage=1 00:09:26.686 --rc genhtml_function_coverage=1 00:09:26.686 --rc genhtml_legend=1 00:09:26.686 --rc geninfo_all_blocks=1 00:09:26.686 --rc geninfo_unexecuted_blocks=1 00:09:26.686 00:09:26.686 ' 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:26.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.686 --rc genhtml_branch_coverage=1 00:09:26.686 --rc genhtml_function_coverage=1 00:09:26.686 --rc genhtml_legend=1 00:09:26.686 --rc geninfo_all_blocks=1 00:09:26.686 --rc geninfo_unexecuted_blocks=1 00:09:26.686 00:09:26.686 ' 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:26.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.686 --rc genhtml_branch_coverage=1 00:09:26.686 --rc genhtml_function_coverage=1 00:09:26.686 --rc genhtml_legend=1 00:09:26.686 --rc geninfo_all_blocks=1 00:09:26.686 --rc geninfo_unexecuted_blocks=1 00:09:26.686 00:09:26.686 ' 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:26.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.686 --rc genhtml_branch_coverage=1 00:09:26.686 --rc genhtml_function_coverage=1 00:09:26.686 --rc genhtml_legend=1 00:09:26.686 --rc geninfo_all_blocks=1 00:09:26.686 --rc geninfo_unexecuted_blocks=1 00:09:26.686 00:09:26.686 ' 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
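
The wall of scripts/common.sh xtrace above is the harness deciding whether the installed lcov predates 2.x (lt 1.15 2) so it can pick compatible coverage flags. The comparison splits each version string on '.', '-' and ':' and compares numeric fields left to right. A condensed, self-contained sketch of the same logic (the function name version_lt is made up here; the real helpers are lt/cmp_versions in scripts/common.sh):

    # return 0 (true) when $1 sorts strictly before $2
    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v a b max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}        # missing fields count as 0
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1                                    # equal versions are not "lt"
    }

    version_lt 1.15 2 && echo 'lcov 1.15 < 2: use --rc lcov_*_coverage options'
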
00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:26.686 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:26.687 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:26.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:26.687 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:26.687 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:26.687 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:26.687 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:26.687 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:26.687 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:26.687 
18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:26.687 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:26.687 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:26.687 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:26.687 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:26.687 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.687 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.687 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.687 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:26.687 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:26.687 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:26.687 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:34.842 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:34.842 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:34.842 18:59:50 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:34.842 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:34.842 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:34.842 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:34.842 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:34.842 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:34.843 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:34.843 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:34.843 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:34.843 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms 00:09:34.843 00:09:34.843 --- 10.0.0.2 ping statistics --- 00:09:34.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.843 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:09:34.843 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:34.843 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:34.843 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:09:34.843 00:09:34.843 --- 10.0.0.1 ping statistics --- 00:09:34.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.843 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:09:34.843 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:34.843 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:34.843 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:34.843 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:34.843 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:34.843 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:34.843 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:34.843 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:34.843 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:34.843 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:34.843 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:34.843 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:34.843 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.843 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2793085 00:09:34.843 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2793085 00:09:34.843 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:34.843 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2793085 ']' 00:09:34.843 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.843 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:34.843 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.843 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:34.843 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.843 [2024-11-26 18:59:51.247037] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
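
Worth annotating: the nvmf_tcp_init trace above splits the two e810 ports across network namespaces so a single host can play both roles, target at 10.0.0.2 inside cvl_0_0_ns_spdk and initiator at 10.0.0.1 in the root namespace. Condensed from the trace (a sketch of the effect, not the verbatim common.sh):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator

The sub-millisecond round trips confirm the path, after which nvmf_tgt is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is why its 10.0.0.2:4420 listener is reachable only through cvl_0_1.
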
00:09:34.843 [2024-11-26 18:59:51.247105] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.843 [2024-11-26 18:59:51.349677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:34.843 [2024-11-26 18:59:51.404527] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.843 [2024-11-26 18:59:51.404578] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:34.843 [2024-11-26 18:59:51.404587] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:34.843 [2024-11-26 18:59:51.404595] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:34.843 [2024-11-26 18:59:51.404601] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:34.843 [2024-11-26 18:59:51.406678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.843 [2024-11-26 18:59:51.406840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:34.843 [2024-11-26 18:59:51.407003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.843 [2024-11-26 18:59:51.407003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:35.105 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:35.105 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:35.105 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:35.105 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:35.105 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:35.105 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.105 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:35.105 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.105 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:35.105 [2024-11-26 18:59:52.128717] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:35.105 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.105 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:35.105 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.105 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:35.105 Malloc0 00:09:35.105 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.105 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:35.106 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.106 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:35.106 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.106 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:35.106 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.106 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:35.106 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.106 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:35.106 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.106 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:35.106 [2024-11-26 18:59:52.202916] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:35.106 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.106 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:35.106 test case1: single bdev can't be used in multiple subsystems 00:09:35.106 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:35.106 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.106 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:35.106 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.106 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:35.106 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.106 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:35.106 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.106 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:35.106 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:35.106 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.106 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:35.106 [2024-11-26 18:59:52.238771] bdev.c:8467:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:35.106 [2024-11-26 18:59:52.238798] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:35.106 [2024-11-26 18:59:52.238807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.106 request: 00:09:35.106 { 00:09:35.106 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:35.106 "namespace": { 00:09:35.106 "bdev_name": "Malloc0", 00:09:35.106 "no_auto_visible": false, 
00:09:35.106 "hide_metadata": false 00:09:35.106 }, 00:09:35.106 "method": "nvmf_subsystem_add_ns", 00:09:35.106 "req_id": 1 00:09:35.106 } 00:09:35.106 Got JSON-RPC error response 00:09:35.106 response: 00:09:35.106 { 00:09:35.106 "code": -32602, 00:09:35.106 "message": "Invalid parameters" 00:09:35.106 } 00:09:35.106 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:35.106 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:35.106 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:35.106 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:35.106 Adding namespace failed - expected result. 00:09:35.106 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:35.106 test case2: host connect to nvmf target in multiple paths 00:09:35.106 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:35.106 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.106 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:35.106 [2024-11-26 18:59:52.250977] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:35.106 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.106 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:37.026 18:59:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:38.485 18:59:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:38.485 18:59:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:38.485 18:59:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:38.485 18:59:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:38.485 18:59:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:09:40.427 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:40.427 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:40.427 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:40.427 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:40.427 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:40.427 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:40.427 18:59:57 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:40.427 [global] 00:09:40.427 thread=1 00:09:40.427 invalidate=1 00:09:40.427 rw=write 00:09:40.427 time_based=1 00:09:40.427 runtime=1 00:09:40.427 ioengine=libaio 00:09:40.427 direct=1 00:09:40.427 bs=4096 00:09:40.427 iodepth=1 00:09:40.427 norandommap=0 00:09:40.427 numjobs=1 00:09:40.427 00:09:40.427 verify_dump=1 00:09:40.427 verify_backlog=512 00:09:40.427 verify_state_save=0 00:09:40.427 do_verify=1 00:09:40.427 verify=crc32c-intel 00:09:40.427 [job0] 00:09:40.427 filename=/dev/nvme0n1 00:09:40.427 Could not set queue depth (nvme0n1) 00:09:40.689 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.689 fio-3.35 00:09:40.689 Starting 1 thread 00:09:42.074 00:09:42.074 job0: (groupid=0, jobs=1): err= 0: pid=2794436: Tue Nov 26 18:59:58 2024 00:09:42.074 read: IOPS=16, BW=67.1KiB/s (68.7kB/s)(68.0KiB/1013msec) 00:09:42.074 slat (nsec): min=25144, max=25901, avg=25566.82, stdev=189.59 00:09:42.074 clat (usec): min=1074, max=42981, avg=39704.40, stdev=9970.25 00:09:42.074 lat (usec): min=1100, max=43007, avg=39729.97, stdev=9970.20 00:09:42.074 clat percentiles (usec): 00:09:42.074 | 1.00th=[ 1074], 5.00th=[ 1074], 10.00th=[41157], 20.00th=[41681], 00:09:42.074 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:09:42.074 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[42730], 00:09:42.074 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:09:42.074 | 99.99th=[42730] 00:09:42.074 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:09:42.074 slat (nsec): min=9487, max=78130, avg=30473.49, stdev=8526.27 00:09:42.074 clat (usec): min=236, max=835, avg=621.75, stdev=112.02 00:09:42.074 lat (usec): min=249, max=868, avg=652.22, stdev=114.78 00:09:42.074 clat percentiles (usec): 00:09:42.074 | 1.00th=[ 355], 5.00th=[ 392], 10.00th=[ 457], 20.00th=[ 529], 00:09:42.074 | 30.00th=[ 562], 40.00th=[ 611], 50.00th=[ 652], 60.00th=[ 676], 00:09:42.074 | 70.00th=[ 701], 80.00th=[ 717], 90.00th=[ 742], 95.00th=[ 758], 00:09:42.074 | 99.00th=[ 799], 99.50th=[ 824], 99.90th=[ 832], 99.95th=[ 832], 00:09:42.074 | 99.99th=[ 832] 00:09:42.074 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:42.074 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:42.074 lat (usec) : 250=0.19%, 500=14.37%, 750=74.86%, 1000=7.37% 00:09:42.074 lat (msec) : 2=0.19%, 50=3.02% 00:09:42.074 cpu : usr=1.09%, sys=1.19%, ctx=529, majf=0, minf=1 00:09:42.074 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.074 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.074 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.074 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.074 00:09:42.074 Run status group 0 (all jobs): 00:09:42.074 READ: bw=67.1KiB/s (68.7kB/s), 67.1KiB/s-67.1KiB/s (68.7kB/s-68.7kB/s), io=68.0KiB (69.6kB), run=1013-1013msec 00:09:42.074 WRITE: bw=2022KiB/s (2070kB/s), 2022KiB/s-2022KiB/s (2070kB/s-2070kB/s), io=2048KiB (2097kB), run=1013-1013msec 00:09:42.074 00:09:42.074 Disk stats (read/write): 00:09:42.075 nvme0n1: ios=64/512, merge=0/0, ticks=594/311, 
in_queue=905, util=93.29% 00:09:42.075 18:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:42.075 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:42.075 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:42.075 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:42.075 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:42.075 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:42.075 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:42.075 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:42.075 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:42.075 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:42.075 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:42.075 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:42.075 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:42.075 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:42.075 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:42.075 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:42.075 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:42.075 rmmod nvme_tcp 00:09:42.075 rmmod nvme_fabrics 00:09:42.075 rmmod nvme_keyring 00:09:42.075 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:42.075 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:42.075 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:42.075 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2793085 ']' 00:09:42.075 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2793085 00:09:42.075 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2793085 ']' 00:09:42.075 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2793085 00:09:42.075 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:42.075 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:42.075 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2793085 00:09:42.075 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:42.075 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:42.075 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2793085' 00:09:42.075 killing process with pid 2793085 00:09:42.075 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@973 -- # kill 2793085 00:09:42.075 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2793085 00:09:42.335 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:42.335 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:42.335 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:42.335 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:42.335 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:42.335 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:42.335 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:42.335 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:42.335 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:42.335 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.335 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.335 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.249 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:44.249 00:09:44.249 real 0m17.990s 00:09:44.249 user 0m48.000s 00:09:44.249 sys 0m6.616s 00:09:44.249 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.249 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:44.249 ************************************ 00:09:44.249 END TEST nvmf_nmic 00:09:44.249 ************************************ 00:09:44.249 19:00:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:44.249 19:00:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:44.249 19:00:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.249 19:00:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:44.510 ************************************ 00:09:44.510 START TEST nvmf_fio_target 00:09:44.510 ************************************ 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:44.510 * Looking for test storage... 
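
The killprocess helper traced above guards against stale pids before stopping the target. A minimal sketch of the pattern (simplified; the real helper also special-cases processes whose comm is sudo, and non-Linux hosts):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0        # nothing left to kill
        local name
        name=$(ps --no-headers -o comm= "$pid")       # e.g. reactor_0 for nvmf_tgt
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid"                                   # reap; only works for children
    }

Here it reports process_name=reactor_0, SPDK's main reactor thread name, then kills pid 2793085 and waits on it before nvmf_tcp_fini unwinds the namespace and the next run_test begins.
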
00:09:44.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:44.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.510 --rc genhtml_branch_coverage=1 00:09:44.510 --rc genhtml_function_coverage=1 00:09:44.510 --rc genhtml_legend=1 00:09:44.510 --rc geninfo_all_blocks=1 00:09:44.510 --rc geninfo_unexecuted_blocks=1 00:09:44.510 00:09:44.510 ' 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:44.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.510 --rc genhtml_branch_coverage=1 00:09:44.510 --rc genhtml_function_coverage=1 00:09:44.510 --rc genhtml_legend=1 00:09:44.510 --rc geninfo_all_blocks=1 00:09:44.510 --rc geninfo_unexecuted_blocks=1 00:09:44.510 00:09:44.510 ' 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:44.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.510 --rc genhtml_branch_coverage=1 00:09:44.510 --rc genhtml_function_coverage=1 00:09:44.510 --rc genhtml_legend=1 00:09:44.510 --rc geninfo_all_blocks=1 00:09:44.510 --rc geninfo_unexecuted_blocks=1 00:09:44.510 00:09:44.510 ' 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:44.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.510 --rc genhtml_branch_coverage=1 00:09:44.510 --rc genhtml_function_coverage=1 00:09:44.510 --rc genhtml_legend=1 00:09:44.510 --rc geninfo_all_blocks=1 00:09:44.510 --rc geninfo_unexecuted_blocks=1 00:09:44.510 00:09:44.510 ' 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:09:44.510 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:44.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:44.511 19:00:01 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:44.511 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:52.661 19:00:08 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:52.661 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:52.661 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.661 19:00:08 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:52.661 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:52.662 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:52.662 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.662 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:52.662 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.662 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:52.662 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:52.662 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.662 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:52.662 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:52.662 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.662 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:52.662 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.662 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:52.662 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.662 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:52.662 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:52.662 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.662 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:52.662 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:52.662 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.662 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:52.662 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:52.662 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:52.662 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:52.662 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:52.662 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:52.662 19:00:08 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:52.662 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:52.662 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:52.662 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:52.662 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:52.662 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:52.662 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:52.662 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:52.662 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:52.662 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:52.662 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:52.662 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:52.662 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:52.662 19:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:52.662 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:52.662 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:52.662 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:52.662 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:52.662 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:52.662 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:52.662 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:52.662 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:52.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:52.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.557 ms 00:09:52.662 00:09:52.662 --- 10.0.0.2 ping statistics --- 00:09:52.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.662 rtt min/avg/max/mdev = 0.557/0.557/0.557/0.000 ms 00:09:52.662 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:52.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:52.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:09:52.662 00:09:52.662 --- 10.0.0.1 ping statistics --- 00:09:52.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.662 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:09:52.662 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:52.662 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:52.662 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:52.662 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:52.662 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:52.662 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:52.662 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:52.662 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:52.662 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:52.662 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:52.662 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:52.662 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:52.662 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:52.662 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2799225 00:09:52.662 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2799225 00:09:52.662 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:52.662 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2799225 ']' 00:09:52.662 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.662 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:52.662 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.662 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:52.662 19:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:52.662 [2024-11-26 19:00:09.272707] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
00:09:52.662 [2024-11-26 19:00:09.272776] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.662 [2024-11-26 19:00:09.373005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:52.662 [2024-11-26 19:00:09.427064] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:52.662 [2024-11-26 19:00:09.427120] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:52.662 [2024-11-26 19:00:09.427129] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:52.663 [2024-11-26 19:00:09.427136] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:52.663 [2024-11-26 19:00:09.427142] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:52.663 [2024-11-26 19:00:09.429447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:52.663 [2024-11-26 19:00:09.429608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:52.663 [2024-11-26 19:00:09.429771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.663 [2024-11-26 19:00:09.429772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:52.924 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:52.924 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:52.924 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:52.924 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:52.924 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:52.924 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:52.924 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:53.186 [2024-11-26 19:00:10.300259] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:53.186 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:53.447 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:53.447 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:53.708 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:53.708 19:00:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:53.970 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:53.970 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:54.231 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:54.231 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:54.231 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:54.492 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:54.492 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:54.754 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:54.754 19:00:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:55.015 19:00:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:55.015 19:00:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:55.015 19:00:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:55.275 19:00:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:55.275 19:00:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:55.536 19:00:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:55.536 19:00:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:55.536 19:00:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:55.796 [2024-11-26 19:00:12.897876] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:55.796 19:00:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:56.056 19:00:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:56.316 19:00:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:57.698 19:00:14 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:57.698 19:00:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:57.698 19:00:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:57.698 19:00:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:57.698 19:00:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:57.698 19:00:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:59.614 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:59.614 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:59.614 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:59.614 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:59.614 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:59.614 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:59.614 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:59.876 [global] 00:09:59.876 thread=1 00:09:59.876 invalidate=1 00:09:59.876 rw=write 00:09:59.876 time_based=1 00:09:59.876 runtime=1 00:09:59.876 ioengine=libaio 00:09:59.876 direct=1 00:09:59.876 bs=4096 00:09:59.876 iodepth=1 00:09:59.876 norandommap=0 00:09:59.876 numjobs=1 00:09:59.876 00:09:59.876 verify_dump=1 00:09:59.876 verify_backlog=512 00:09:59.876 verify_state_save=0 00:09:59.876 do_verify=1 00:09:59.876 verify=crc32c-intel 00:09:59.876 [job0] 00:09:59.876 filename=/dev/nvme0n1 00:09:59.876 [job1] 00:09:59.876 filename=/dev/nvme0n2 00:09:59.876 [job2] 00:09:59.876 filename=/dev/nvme0n3 00:09:59.876 [job3] 00:09:59.876 filename=/dev/nvme0n4 00:09:59.876 Could not set queue depth (nvme0n1) 00:09:59.876 Could not set queue depth (nvme0n2) 00:09:59.876 Could not set queue depth (nvme0n3) 00:09:59.876 Could not set queue depth (nvme0n4) 00:10:00.137 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:00.137 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:00.137 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:00.137 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:00.137 fio-3.35 00:10:00.137 Starting 4 threads 00:10:01.522 00:10:01.522 job0: (groupid=0, jobs=1): err= 0: pid=2801434: Tue Nov 26 19:00:18 2024 00:10:01.522 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:01.522 slat (nsec): min=24974, max=43012, avg=25775.35, stdev=1685.90 00:10:01.522 clat (usec): min=606, max=1225, avg=951.89, stdev=101.25 00:10:01.522 lat (usec): min=632, max=1251, avg=977.67, stdev=101.06 00:10:01.522 clat percentiles (usec): 00:10:01.522 | 1.00th=[ 701], 5.00th=[ 775], 10.00th=[ 824], 20.00th=[ 865], 
00:10:01.522 | 30.00th=[ 898], 40.00th=[ 930], 50.00th=[ 963], 60.00th=[ 988], 00:10:01.522 | 70.00th=[ 1012], 80.00th=[ 1037], 90.00th=[ 1074], 95.00th=[ 1106], 00:10:01.522 | 99.00th=[ 1188], 99.50th=[ 1188], 99.90th=[ 1221], 99.95th=[ 1221], 00:10:01.522 | 99.99th=[ 1221] 00:10:01.522 write: IOPS=755, BW=3021KiB/s (3093kB/s)(3024KiB/1001msec); 0 zone resets 00:10:01.522 slat (usec): min=10, max=10639, avg=46.02, stdev=385.89 00:10:01.522 clat (usec): min=181, max=987, avg=598.85, stdev=134.86 00:10:01.522 lat (usec): min=216, max=11295, avg=644.87, stdev=411.41 00:10:01.522 clat percentiles (usec): 00:10:01.522 | 1.00th=[ 285], 5.00th=[ 367], 10.00th=[ 424], 20.00th=[ 474], 00:10:01.522 | 30.00th=[ 529], 40.00th=[ 570], 50.00th=[ 611], 60.00th=[ 644], 00:10:01.522 | 70.00th=[ 676], 80.00th=[ 717], 90.00th=[ 775], 95.00th=[ 807], 00:10:01.522 | 99.00th=[ 889], 99.50th=[ 922], 99.90th=[ 988], 99.95th=[ 988], 00:10:01.522 | 99.99th=[ 988] 00:10:01.522 bw ( KiB/s): min= 4096, max= 4096, per=37.87%, avg=4096.00, stdev= 0.00, samples=1 00:10:01.522 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:01.522 lat (usec) : 250=0.16%, 500=13.96%, 750=39.27%, 1000=32.26% 00:10:01.522 lat (msec) : 2=14.35% 00:10:01.522 cpu : usr=2.30%, sys=3.50%, ctx=1272, majf=0, minf=1 00:10:01.522 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.522 issued rwts: total=512,756,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.522 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.522 job1: (groupid=0, jobs=1): err= 0: pid=2801449: Tue Nov 26 19:00:18 2024 00:10:01.522 read: IOPS=611, BW=2447KiB/s (2505kB/s)(2520KiB/1030msec) 00:10:01.522 slat (nsec): min=7047, max=55886, avg=25011.66, stdev=7440.58 00:10:01.522 clat (usec): min=334, max=41052, avg=831.16, stdev=1608.55 00:10:01.522 lat (usec): min=342, max=41079, avg=856.17, stdev=1608.74 00:10:01.522 clat percentiles (usec): 00:10:01.522 | 1.00th=[ 510], 5.00th=[ 570], 10.00th=[ 619], 20.00th=[ 676], 00:10:01.522 | 30.00th=[ 717], 40.00th=[ 750], 50.00th=[ 783], 60.00th=[ 807], 00:10:01.522 | 70.00th=[ 832], 80.00th=[ 857], 90.00th=[ 889], 95.00th=[ 930], 00:10:01.522 | 99.00th=[ 979], 99.50th=[ 1037], 99.90th=[41157], 99.95th=[41157], 00:10:01.522 | 99.99th=[41157] 00:10:01.522 write: IOPS=994, BW=3977KiB/s (4072kB/s)(4096KiB/1030msec); 0 zone resets 00:10:01.522 slat (nsec): min=9903, max=73478, avg=31390.02, stdev=10375.15 00:10:01.522 clat (usec): min=119, max=758, avg=432.90, stdev=99.16 00:10:01.522 lat (usec): min=131, max=796, avg=464.29, stdev=102.44 00:10:01.522 clat percentiles (usec): 00:10:01.522 | 1.00th=[ 219], 5.00th=[ 269], 10.00th=[ 302], 20.00th=[ 347], 00:10:01.522 | 30.00th=[ 379], 40.00th=[ 408], 50.00th=[ 437], 60.00th=[ 465], 00:10:01.522 | 70.00th=[ 490], 80.00th=[ 519], 90.00th=[ 553], 95.00th=[ 586], 00:10:01.522 | 99.00th=[ 660], 99.50th=[ 717], 99.90th=[ 758], 99.95th=[ 758], 00:10:01.522 | 99.99th=[ 758] 00:10:01.522 bw ( KiB/s): min= 4096, max= 4096, per=37.87%, avg=4096.00, stdev= 0.00, samples=2 00:10:01.522 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:10:01.522 lat (usec) : 250=1.93%, 500=43.83%, 750=31.02%, 1000=22.97% 00:10:01.522 lat (msec) : 2=0.18%, 50=0.06% 00:10:01.522 cpu : usr=2.92%, sys=4.08%, ctx=1655, majf=0, minf=1 00:10:01.522 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.522 issued rwts: total=630,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.522 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.522 job2: (groupid=0, jobs=1): err= 0: pid=2801468: Tue Nov 26 19:00:18 2024 00:10:01.522 read: IOPS=16, BW=65.6KiB/s (67.1kB/s)(68.0KiB/1037msec) 00:10:01.522 slat (nsec): min=26749, max=27239, avg=26958.12, stdev=158.12 00:10:01.522 clat (usec): min=41865, max=43060, avg=42281.43, stdev=470.08 00:10:01.522 lat (usec): min=41892, max=43086, avg=42308.38, stdev=470.05 00:10:01.522 clat percentiles (usec): 00:10:01.522 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:10:01.522 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:10:01.522 | 70.00th=[42206], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:10:01.522 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:01.522 | 99.99th=[43254] 00:10:01.522 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:10:01.522 slat (nsec): min=9322, max=91850, avg=29199.84, stdev=10672.67 00:10:01.522 clat (usec): min=176, max=923, avg=585.24, stdev=118.80 00:10:01.522 lat (usec): min=213, max=937, avg=614.44, stdev=123.72 00:10:01.522 clat percentiles (usec): 00:10:01.522 | 1.00th=[ 289], 5.00th=[ 355], 10.00th=[ 433], 20.00th=[ 478], 00:10:01.522 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 594], 60.00th=[ 627], 00:10:01.522 | 70.00th=[ 660], 80.00th=[ 685], 90.00th=[ 725], 95.00th=[ 758], 00:10:01.522 | 99.00th=[ 807], 99.50th=[ 873], 99.90th=[ 922], 99.95th=[ 922], 00:10:01.522 | 99.99th=[ 922] 00:10:01.522 bw ( KiB/s): min= 4096, max= 4096, per=37.87%, avg=4096.00, stdev= 0.00, samples=1 00:10:01.522 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:01.522 lat (usec) : 250=0.19%, 500=23.44%, 750=68.24%, 1000=4.91% 00:10:01.522 lat (msec) : 50=3.21% 00:10:01.522 cpu : usr=0.97%, sys=1.83%, ctx=530, majf=0, minf=2 00:10:01.522 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.522 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.522 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.522 job3: (groupid=0, jobs=1): err= 0: pid=2801475: Tue Nov 26 19:00:18 2024 00:10:01.522 read: IOPS=19, BW=77.6KiB/s (79.5kB/s)(80.0KiB/1031msec) 00:10:01.522 slat (nsec): min=22591, max=28071, avg=27457.80, stdev=1154.77 00:10:01.522 clat (usec): min=746, max=42213, avg=39357.44, stdev=9252.67 00:10:01.522 lat (usec): min=773, max=42241, avg=39384.90, stdev=9252.63 00:10:01.522 clat percentiles (usec): 00:10:01.522 | 1.00th=[ 750], 5.00th=[ 750], 10.00th=[34341], 20.00th=[41157], 00:10:01.522 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:10:01.522 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:01.522 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:01.522 | 99.99th=[42206] 00:10:01.522 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:10:01.522 slat (usec): min=10, max=12182, avg=54.19, stdev=537.20 00:10:01.522 clat (usec): min=133, max=626, avg=412.13, 
stdev=95.44 00:10:01.522 lat (usec): min=170, max=12668, avg=466.31, stdev=549.78 00:10:01.522 clat percentiles (usec): 00:10:01.522 | 1.00th=[ 225], 5.00th=[ 262], 10.00th=[ 281], 20.00th=[ 322], 00:10:01.522 | 30.00th=[ 355], 40.00th=[ 383], 50.00th=[ 408], 60.00th=[ 453], 00:10:01.522 | 70.00th=[ 478], 80.00th=[ 502], 90.00th=[ 529], 95.00th=[ 562], 00:10:01.522 | 99.00th=[ 611], 99.50th=[ 619], 99.90th=[ 627], 99.95th=[ 627], 00:10:01.522 | 99.99th=[ 627] 00:10:01.522 bw ( KiB/s): min= 4096, max= 4096, per=37.87%, avg=4096.00, stdev= 0.00, samples=1 00:10:01.522 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:01.522 lat (usec) : 250=3.38%, 500=72.74%, 750=20.30% 00:10:01.522 lat (msec) : 50=3.57% 00:10:01.522 cpu : usr=0.68%, sys=1.46%, ctx=537, majf=0, minf=1 00:10:01.522 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.522 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.522 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.522 00:10:01.522 Run status group 0 (all jobs): 00:10:01.522 READ: bw=4548KiB/s (4657kB/s), 65.6KiB/s-2447KiB/s (67.1kB/s-2505kB/s), io=4716KiB (4829kB), run=1001-1037msec 00:10:01.522 WRITE: bw=10.6MiB/s (11.1MB/s), 1975KiB/s-3977KiB/s (2022kB/s-4072kB/s), io=11.0MiB (11.5MB), run=1001-1037msec 00:10:01.522 00:10:01.523 Disk stats (read/write): 00:10:01.523 nvme0n1: ios=511/512, merge=0/0, ticks=1295/300, in_queue=1595, util=83.77% 00:10:01.523 nvme0n2: ios=534/886, merge=0/0, ticks=1261/364, in_queue=1625, util=87.84% 00:10:01.523 nvme0n3: ios=69/512, merge=0/0, ticks=616/241, in_queue=857, util=94.93% 00:10:01.523 nvme0n4: ios=63/512, merge=0/0, ticks=840/209, in_queue=1049, util=96.36% 00:10:01.523 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:01.523 [global] 00:10:01.523 thread=1 00:10:01.523 invalidate=1 00:10:01.523 rw=randwrite 00:10:01.523 time_based=1 00:10:01.523 runtime=1 00:10:01.523 ioengine=libaio 00:10:01.523 direct=1 00:10:01.523 bs=4096 00:10:01.523 iodepth=1 00:10:01.523 norandommap=0 00:10:01.523 numjobs=1 00:10:01.523 00:10:01.523 verify_dump=1 00:10:01.523 verify_backlog=512 00:10:01.523 verify_state_save=0 00:10:01.523 do_verify=1 00:10:01.523 verify=crc32c-intel 00:10:01.523 [job0] 00:10:01.523 filename=/dev/nvme0n1 00:10:01.523 [job1] 00:10:01.523 filename=/dev/nvme0n2 00:10:01.523 [job2] 00:10:01.523 filename=/dev/nvme0n3 00:10:01.523 [job3] 00:10:01.523 filename=/dev/nvme0n4 00:10:01.523 Could not set queue depth (nvme0n1) 00:10:01.523 Could not set queue depth (nvme0n2) 00:10:01.523 Could not set queue depth (nvme0n3) 00:10:01.523 Could not set queue depth (nvme0n4) 00:10:01.783 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:01.783 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:01.783 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:01.783 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:01.783 fio-3.35 00:10:01.783 Starting 4 threads 00:10:03.167 00:10:03.167 job0: 
(groupid=0, jobs=1): err= 0: pid=2801933: Tue Nov 26 19:00:20 2024 00:10:03.167 read: IOPS=16, BW=65.4KiB/s (67.0kB/s)(68.0KiB/1040msec) 00:10:03.167 slat (nsec): min=25379, max=26297, avg=25648.82, stdev=257.85 00:10:03.167 clat (usec): min=41817, max=42928, avg=42087.24, stdev=318.85 00:10:03.167 lat (usec): min=41842, max=42954, avg=42112.89, stdev=318.81 00:10:03.167 clat percentiles (usec): 00:10:03.167 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:10:03.167 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:10:03.167 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:10:03.167 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:03.167 | 99.99th=[42730] 00:10:03.167 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:10:03.167 slat (nsec): min=9693, max=51784, avg=30043.47, stdev=8987.69 00:10:03.167 clat (usec): min=213, max=965, avg=593.00, stdev=119.11 00:10:03.167 lat (usec): min=223, max=998, avg=623.05, stdev=122.46 00:10:03.167 clat percentiles (usec): 00:10:03.167 | 1.00th=[ 314], 5.00th=[ 396], 10.00th=[ 445], 20.00th=[ 486], 00:10:03.167 | 30.00th=[ 529], 40.00th=[ 570], 50.00th=[ 594], 60.00th=[ 627], 00:10:03.167 | 70.00th=[ 668], 80.00th=[ 693], 90.00th=[ 742], 95.00th=[ 783], 00:10:03.167 | 99.00th=[ 857], 99.50th=[ 865], 99.90th=[ 963], 99.95th=[ 963], 00:10:03.167 | 99.99th=[ 963] 00:10:03.167 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:03.167 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:03.167 lat (usec) : 250=0.19%, 500=24.39%, 750=64.65%, 1000=7.56% 00:10:03.167 lat (msec) : 50=3.21% 00:10:03.167 cpu : usr=0.87%, sys=1.35%, ctx=531, majf=0, minf=1 00:10:03.168 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:03.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.168 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.168 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.168 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:03.168 job1: (groupid=0, jobs=1): err= 0: pid=2801952: Tue Nov 26 19:00:20 2024 00:10:03.168 read: IOPS=18, BW=73.7KiB/s (75.5kB/s)(76.0KiB/1031msec) 00:10:03.168 slat (nsec): min=25660, max=27132, avg=25990.53, stdev=350.10 00:10:03.168 clat (usec): min=927, max=42727, avg=39792.94, stdev=9416.18 00:10:03.168 lat (usec): min=953, max=42753, avg=39818.93, stdev=9416.19 00:10:03.168 clat percentiles (usec): 00:10:03.168 | 1.00th=[ 930], 5.00th=[ 930], 10.00th=[41157], 20.00th=[41681], 00:10:03.168 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:10:03.168 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:10:03.168 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:03.168 | 99.99th=[42730] 00:10:03.168 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:10:03.168 slat (nsec): min=9435, max=65649, avg=30287.08, stdev=9000.95 00:10:03.168 clat (usec): min=129, max=761, avg=497.24, stdev=125.58 00:10:03.168 lat (usec): min=140, max=794, avg=527.52, stdev=128.24 00:10:03.168 clat percentiles (usec): 00:10:03.168 | 1.00th=[ 186], 5.00th=[ 269], 10.00th=[ 347], 20.00th=[ 392], 00:10:03.168 | 30.00th=[ 424], 40.00th=[ 478], 50.00th=[ 502], 60.00th=[ 529], 00:10:03.168 | 70.00th=[ 570], 80.00th=[ 611], 90.00th=[ 660], 95.00th=[ 693], 00:10:03.168 | 
99.00th=[ 742], 99.50th=[ 750], 99.90th=[ 758], 99.95th=[ 758], 00:10:03.168 | 99.99th=[ 758] 00:10:03.168 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:03.168 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:03.168 lat (usec) : 250=3.39%, 500=43.88%, 750=48.59%, 1000=0.75% 00:10:03.168 lat (msec) : 50=3.39% 00:10:03.168 cpu : usr=0.78%, sys=1.55%, ctx=531, majf=0, minf=2 00:10:03.168 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:03.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.168 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.168 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.168 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:03.168 job2: (groupid=0, jobs=1): err= 0: pid=2801973: Tue Nov 26 19:00:20 2024 00:10:03.168 read: IOPS=223, BW=895KiB/s (917kB/s)(896KiB/1001msec) 00:10:03.168 slat (nsec): min=12926, max=64000, avg=26635.07, stdev=4925.94 00:10:03.168 clat (usec): min=456, max=42577, avg=3053.98, stdev=8871.27 00:10:03.168 lat (usec): min=482, max=42603, avg=3080.62, stdev=8871.13 00:10:03.168 clat percentiles (usec): 00:10:03.168 | 1.00th=[ 734], 5.00th=[ 865], 10.00th=[ 922], 20.00th=[ 971], 00:10:03.168 | 30.00th=[ 996], 40.00th=[ 1020], 50.00th=[ 1057], 60.00th=[ 1090], 00:10:03.168 | 70.00th=[ 1123], 80.00th=[ 1156], 90.00th=[ 1188], 95.00th=[ 1237], 00:10:03.168 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:10:03.168 | 99.99th=[42730] 00:10:03.168 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:10:03.168 slat (nsec): min=9332, max=80802, avg=28004.06, stdev=10262.80 00:10:03.168 clat (usec): min=136, max=974, avg=567.04, stdev=142.88 00:10:03.168 lat (usec): min=146, max=1006, avg=595.04, stdev=147.94 00:10:03.168 clat percentiles (usec): 00:10:03.168 | 1.00th=[ 247], 5.00th=[ 338], 10.00th=[ 379], 20.00th=[ 441], 00:10:03.168 | 30.00th=[ 490], 40.00th=[ 545], 50.00th=[ 578], 60.00th=[ 611], 00:10:03.168 | 70.00th=[ 644], 80.00th=[ 685], 90.00th=[ 742], 95.00th=[ 791], 00:10:03.168 | 99.00th=[ 930], 99.50th=[ 947], 99.90th=[ 971], 99.95th=[ 971], 00:10:03.168 | 99.99th=[ 971] 00:10:03.168 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:03.168 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:03.168 lat (usec) : 250=0.82%, 500=21.20%, 750=41.98%, 1000=15.08% 00:10:03.168 lat (msec) : 2=19.43%, 50=1.49% 00:10:03.168 cpu : usr=1.00%, sys=2.10%, ctx=736, majf=0, minf=2 00:10:03.168 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:03.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.168 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.168 issued rwts: total=224,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.168 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:03.168 job3: (groupid=0, jobs=1): err= 0: pid=2801980: Tue Nov 26 19:00:20 2024 00:10:03.168 read: IOPS=18, BW=75.8KiB/s (77.7kB/s)(76.0KiB/1002msec) 00:10:03.168 slat (nsec): min=25921, max=27416, avg=26330.58, stdev=310.62 00:10:03.168 clat (usec): min=8103, max=42042, avg=39890.83, stdev=7709.59 00:10:03.168 lat (usec): min=8129, max=42068, avg=39917.17, stdev=7709.61 00:10:03.168 clat percentiles (usec): 00:10:03.168 | 1.00th=[ 8094], 5.00th=[ 8094], 10.00th=[41157], 20.00th=[41157], 
00:10:03.168 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:10:03.168 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:03.168 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:03.168 | 99.99th=[42206] 00:10:03.168 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:10:03.168 slat (nsec): min=9462, max=62121, avg=28028.63, stdev=9666.50 00:10:03.168 clat (usec): min=113, max=746, avg=440.06, stdev=102.88 00:10:03.168 lat (usec): min=123, max=777, avg=468.09, stdev=107.40 00:10:03.168 clat percentiles (usec): 00:10:03.168 | 1.00th=[ 176], 5.00th=[ 269], 10.00th=[ 297], 20.00th=[ 351], 00:10:03.168 | 30.00th=[ 392], 40.00th=[ 429], 50.00th=[ 453], 60.00th=[ 474], 00:10:03.168 | 70.00th=[ 498], 80.00th=[ 523], 90.00th=[ 562], 95.00th=[ 603], 00:10:03.168 | 99.00th=[ 660], 99.50th=[ 709], 99.90th=[ 750], 99.95th=[ 750], 00:10:03.168 | 99.99th=[ 750] 00:10:03.168 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:03.168 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:03.168 lat (usec) : 250=3.39%, 500=64.78%, 750=28.25% 00:10:03.168 lat (msec) : 10=0.19%, 50=3.39% 00:10:03.168 cpu : usr=0.90%, sys=1.30%, ctx=531, majf=0, minf=2 00:10:03.168 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:03.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.168 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.168 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.168 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:03.168 00:10:03.168 Run status group 0 (all jobs): 00:10:03.168 READ: bw=1073KiB/s (1099kB/s), 65.4KiB/s-895KiB/s (67.0kB/s-917kB/s), io=1116KiB (1143kB), run=1001-1040msec 00:10:03.168 WRITE: bw=7877KiB/s (8066kB/s), 1969KiB/s-2046KiB/s (2016kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1040msec 00:10:03.168 00:10:03.168 Disk stats (read/write): 00:10:03.168 nvme0n1: ios=37/512, merge=0/0, ticks=1468/287, in_queue=1755, util=96.69% 00:10:03.168 nvme0n2: ios=46/512, merge=0/0, ticks=581/219, in_queue=800, util=86.63% 00:10:03.168 nvme0n3: ios=127/512, merge=0/0, ticks=505/266, in_queue=771, util=88.37% 00:10:03.168 nvme0n4: ios=54/512, merge=0/0, ticks=716/219, in_queue=935, util=95.94% 00:10:03.168 19:00:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:03.168 [global] 00:10:03.168 thread=1 00:10:03.168 invalidate=1 00:10:03.168 rw=write 00:10:03.168 time_based=1 00:10:03.168 runtime=1 00:10:03.168 ioengine=libaio 00:10:03.168 direct=1 00:10:03.168 bs=4096 00:10:03.168 iodepth=128 00:10:03.168 norandommap=0 00:10:03.168 numjobs=1 00:10:03.168 00:10:03.168 verify_dump=1 00:10:03.168 verify_backlog=512 00:10:03.168 verify_state_save=0 00:10:03.168 do_verify=1 00:10:03.168 verify=crc32c-intel 00:10:03.168 [job0] 00:10:03.168 filename=/dev/nvme0n1 00:10:03.168 [job1] 00:10:03.168 filename=/dev/nvme0n2 00:10:03.168 [job2] 00:10:03.168 filename=/dev/nvme0n3 00:10:03.168 [job3] 00:10:03.168 filename=/dev/nvme0n4 00:10:03.168 Could not set queue depth (nvme0n1) 00:10:03.168 Could not set queue depth (nvme0n2) 00:10:03.168 Could not set queue depth (nvme0n3) 00:10:03.168 Could not set queue depth (nvme0n4) 00:10:03.428 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:10:03.428 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:03.428 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:03.428 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:03.428 fio-3.35 00:10:03.428 Starting 4 threads 00:10:04.814 00:10:04.814 job0: (groupid=0, jobs=1): err= 0: pid=2802440: Tue Nov 26 19:00:21 2024 00:10:04.814 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:10:04.814 slat (nsec): min=893, max=16213k, avg=104844.03, stdev=876230.83 00:10:04.814 clat (usec): min=4038, max=52505, avg=13500.86, stdev=8343.17 00:10:04.814 lat (usec): min=4046, max=52528, avg=13605.71, stdev=8430.10 00:10:04.814 clat percentiles (usec): 00:10:04.814 | 1.00th=[ 5014], 5.00th=[ 5997], 10.00th=[ 6980], 20.00th=[ 7570], 00:10:04.814 | 30.00th=[ 8356], 40.00th=[ 9110], 50.00th=[ 9765], 60.00th=[10683], 00:10:04.814 | 70.00th=[15926], 80.00th=[19006], 90.00th=[26608], 95.00th=[29492], 00:10:04.814 | 99.00th=[43254], 99.50th=[45876], 99.90th=[48497], 99.95th=[48497], 00:10:04.814 | 99.99th=[52691] 00:10:04.814 write: IOPS=5103, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:10:04.814 slat (nsec): min=1549, max=17559k, avg=89253.81, stdev=658329.55 00:10:04.814 clat (usec): min=760, max=101747, avg=12699.08, stdev=12896.31 00:10:04.814 lat (usec): min=1213, max=101757, avg=12788.34, stdev=12972.40 00:10:04.814 clat percentiles (msec): 00:10:04.814 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 5], 20.00th=[ 7], 00:10:04.814 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 10], 60.00th=[ 12], 00:10:04.814 | 70.00th=[ 13], 80.00th=[ 17], 90.00th=[ 19], 95.00th=[ 29], 00:10:04.814 | 99.00th=[ 88], 99.50th=[ 97], 99.90th=[ 103], 99.95th=[ 103], 00:10:04.814 | 99.99th=[ 103] 00:10:04.814 bw ( KiB/s): min=16416, max=23504, per=22.97%, avg=19960.00, stdev=5011.97, samples=2 00:10:04.814 iops : min= 4104, max= 5876, avg=4990.00, stdev=1252.99, samples=2 00:10:04.814 lat (usec) : 1000=0.01% 00:10:04.814 lat (msec) : 2=0.13%, 4=1.83%, 10=53.29%, 20=32.35%, 50=10.84% 00:10:04.814 lat (msec) : 100=1.40%, 250=0.14% 00:10:04.814 cpu : usr=3.60%, sys=5.09%, ctx=405, majf=0, minf=1 00:10:04.814 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:04.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:04.814 issued rwts: total=4608,5114,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.815 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:04.815 job1: (groupid=0, jobs=1): err= 0: pid=2802452: Tue Nov 26 19:00:21 2024 00:10:04.815 read: IOPS=7783, BW=30.4MiB/s (31.9MB/s)(30.5MiB/1003msec) 00:10:04.815 slat (nsec): min=934, max=14398k, avg=62111.23, stdev=426398.38 00:10:04.815 clat (usec): min=2603, max=35448, avg=7877.27, stdev=4131.03 00:10:04.815 lat (usec): min=4153, max=35476, avg=7939.38, stdev=4170.46 00:10:04.815 clat percentiles (usec): 00:10:04.815 | 1.00th=[ 4555], 5.00th=[ 5276], 10.00th=[ 5800], 20.00th=[ 6063], 00:10:04.815 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6390], 60.00th=[ 6587], 00:10:04.815 | 70.00th=[ 6849], 80.00th=[ 7832], 90.00th=[12649], 95.00th=[17171], 00:10:04.815 | 99.00th=[26346], 99.50th=[27657], 99.90th=[32375], 99.95th=[32375], 00:10:04.815 | 99.99th=[35390] 00:10:04.815 write: IOPS=8167, 
BW=31.9MiB/s (33.5MB/s)(32.0MiB/1003msec); 0 zone resets 00:10:04.815 slat (nsec): min=1607, max=18574k, avg=58606.53, stdev=397697.75 00:10:04.815 clat (usec): min=1168, max=36010, avg=8007.45, stdev=4667.11 00:10:04.815 lat (usec): min=1218, max=36052, avg=8066.06, stdev=4702.11 00:10:04.815 clat percentiles (usec): 00:10:04.815 | 1.00th=[ 4015], 5.00th=[ 5473], 10.00th=[ 5735], 20.00th=[ 5866], 00:10:04.815 | 30.00th=[ 5932], 40.00th=[ 5997], 50.00th=[ 6063], 60.00th=[ 6325], 00:10:04.815 | 70.00th=[ 7570], 80.00th=[ 8455], 90.00th=[11731], 95.00th=[19530], 00:10:04.815 | 99.00th=[29492], 99.50th=[29492], 99.90th=[30802], 99.95th=[30802], 00:10:04.815 | 99.99th=[35914] 00:10:04.815 bw ( KiB/s): min=32768, max=32768, per=37.72%, avg=32768.00, stdev= 0.00, samples=2 00:10:04.815 iops : min= 8192, max= 8192, avg=8192.00, stdev= 0.00, samples=2 00:10:04.815 lat (msec) : 2=0.01%, 4=0.39%, 10=85.67%, 20=9.91%, 50=4.02% 00:10:04.815 cpu : usr=4.79%, sys=7.29%, ctx=829, majf=0, minf=1 00:10:04.815 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:04.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:04.815 issued rwts: total=7807,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.815 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:04.815 job2: (groupid=0, jobs=1): err= 0: pid=2802472: Tue Nov 26 19:00:21 2024 00:10:04.815 read: IOPS=4038, BW=15.8MiB/s (16.5MB/s)(15.8MiB/1002msec) 00:10:04.815 slat (nsec): min=974, max=15489k, avg=116190.15, stdev=885730.83 00:10:04.815 clat (usec): min=1154, max=38961, avg=14751.47, stdev=5694.95 00:10:04.815 lat (usec): min=3555, max=50385, avg=14867.66, stdev=5784.98 00:10:04.815 clat percentiles (usec): 00:10:04.815 | 1.00th=[ 4047], 5.00th=[ 7570], 10.00th=[ 8586], 20.00th=[ 9110], 00:10:04.815 | 30.00th=[10159], 40.00th=[12649], 50.00th=[13960], 60.00th=[16057], 00:10:04.815 | 70.00th=[16909], 80.00th=[19268], 90.00th=[22414], 95.00th=[23725], 00:10:04.815 | 99.00th=[30540], 99.50th=[38011], 99.90th=[38011], 99.95th=[38011], 00:10:04.815 | 99.99th=[39060] 00:10:04.815 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:10:04.815 slat (nsec): min=1662, max=15040k, avg=122967.20, stdev=776817.61 00:10:04.815 clat (msec): min=2, max=105, avg=16.04, stdev=16.12 00:10:04.815 lat (msec): min=2, max=105, avg=16.16, stdev=16.23 00:10:04.815 clat percentiles (msec): 00:10:04.815 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:10:04.815 | 30.00th=[ 9], 40.00th=[ 11], 50.00th=[ 13], 60.00th=[ 14], 00:10:04.815 | 70.00th=[ 16], 80.00th=[ 19], 90.00th=[ 21], 95.00th=[ 49], 00:10:04.815 | 99.00th=[ 97], 99.50th=[ 103], 99.90th=[ 106], 99.95th=[ 106], 00:10:04.815 | 99.99th=[ 106] 00:10:04.815 bw ( KiB/s): min=16384, max=16384, per=18.86%, avg=16384.00, stdev= 0.00, samples=2 00:10:04.815 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:10:04.815 lat (msec) : 2=0.01%, 4=1.62%, 10=32.25%, 20=51.69%, 50=11.99% 00:10:04.815 lat (msec) : 100=1.99%, 250=0.45% 00:10:04.815 cpu : usr=2.40%, sys=5.49%, ctx=397, majf=0, minf=1 00:10:04.815 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:04.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:04.815 issued rwts: total=4047,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 
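These per-job blocks are dense but mechanical: "issued rwts: total=4047,4096,0,0" counts the reads, writes, trims and syncs actually submitted, and it squares with the headline rates above (4047 reads x 4096 B in ~1.002 s is the 15.8 MiB/s reported for job2). When these figures need to be consumed by a script, fio's JSON output is far less brittle than parsing this text form; a minimal sketch, assuming jq is available and job.fio stands in for the generated job file:

  fio --output-format=json job.fio > result.json
  jq '.jobs[0].write.iops, .jobs[0].write.bw' result.json   # bw is reported in KiB/s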
00:10:04.815 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:04.815 job3: (groupid=0, jobs=1): err= 0: pid=2802479: Tue Nov 26 19:00:21 2024 00:10:04.815 read: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec) 00:10:04.815 slat (nsec): min=947, max=14254k, avg=127005.47, stdev=926357.74 00:10:04.815 clat (usec): min=4403, max=41558, avg=15973.07, stdev=6733.17 00:10:04.815 lat (usec): min=4408, max=41566, avg=16100.08, stdev=6807.30 00:10:04.815 clat percentiles (usec): 00:10:04.815 | 1.00th=[ 6390], 5.00th=[ 6980], 10.00th=[ 8225], 20.00th=[10028], 00:10:04.815 | 30.00th=[11731], 40.00th=[13566], 50.00th=[14746], 60.00th=[16188], 00:10:04.815 | 70.00th=[18482], 80.00th=[21365], 90.00th=[24511], 95.00th=[30802], 00:10:04.815 | 99.00th=[33817], 99.50th=[35390], 99.90th=[35390], 99.95th=[40109], 00:10:04.815 | 99.99th=[41681] 00:10:04.815 write: IOPS=4438, BW=17.3MiB/s (18.2MB/s)(17.5MiB/1007msec); 0 zone resets 00:10:04.815 slat (nsec): min=1651, max=9486.8k, avg=101735.18, stdev=596774.47 00:10:04.815 clat (usec): min=1207, max=60722, avg=13936.85, stdev=9408.92 00:10:04.815 lat (usec): min=1219, max=60725, avg=14038.59, stdev=9462.53 00:10:04.815 clat percentiles (usec): 00:10:04.815 | 1.00th=[ 4113], 5.00th=[ 5276], 10.00th=[ 6063], 20.00th=[ 7242], 00:10:04.815 | 30.00th=[ 8586], 40.00th=[10028], 50.00th=[12256], 60.00th=[13173], 00:10:04.815 | 70.00th=[14222], 80.00th=[18220], 90.00th=[23462], 95.00th=[30016], 00:10:04.815 | 99.00th=[56361], 99.50th=[57934], 99.90th=[60556], 99.95th=[60556], 00:10:04.815 | 99.99th=[60556] 00:10:04.815 bw ( KiB/s): min=16960, max=17784, per=20.00%, avg=17372.00, stdev=582.66, samples=2 00:10:04.815 iops : min= 4240, max= 4446, avg=4343.00, stdev=145.66, samples=2 00:10:04.815 lat (msec) : 2=0.02%, 4=0.49%, 10=28.88%, 20=50.84%, 50=18.57% 00:10:04.815 lat (msec) : 100=1.19% 00:10:04.815 cpu : usr=3.58%, sys=4.77%, ctx=341, majf=0, minf=2 00:10:04.815 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:04.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:04.815 issued rwts: total=4096,4470,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.815 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:04.815 00:10:04.815 Run status group 0 (all jobs): 00:10:04.815 READ: bw=79.7MiB/s (83.6MB/s), 15.8MiB/s-30.4MiB/s (16.5MB/s-31.9MB/s), io=80.3MiB (84.2MB), run=1002-1007msec 00:10:04.815 WRITE: bw=84.8MiB/s (89.0MB/s), 16.0MiB/s-31.9MiB/s (16.7MB/s-33.5MB/s), io=85.4MiB (89.6MB), run=1002-1007msec 00:10:04.815 00:10:04.815 Disk stats (read/write): 00:10:04.815 nvme0n1: ios=3518/3584, merge=0/0, ticks=49856/51630, in_queue=101486, util=86.17% 00:10:04.815 nvme0n2: ios=6768/7168, merge=0/0, ticks=23138/24272, in_queue=47410, util=97.34% 00:10:04.815 nvme0n3: ios=2698/3072, merge=0/0, ticks=45085/56038, in_queue=101123, util=96.39% 00:10:04.815 nvme0n4: ios=3622/3879, merge=0/0, ticks=57083/43186, in_queue=100269, util=91.30% 00:10:04.815 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:04.815 [global] 00:10:04.815 thread=1 00:10:04.815 invalidate=1 00:10:04.815 rw=randwrite 00:10:04.815 time_based=1 00:10:04.815 runtime=1 00:10:04.815 ioengine=libaio 00:10:04.815 direct=1 00:10:04.815 bs=4096 00:10:04.815 iodepth=128 00:10:04.815 norandommap=0 
00:10:04.815 numjobs=1 00:10:04.815 00:10:04.815 verify_dump=1 00:10:04.815 verify_backlog=512 00:10:04.815 verify_state_save=0 00:10:04.815 do_verify=1 00:10:04.815 verify=crc32c-intel 00:10:04.815 [job0] 00:10:04.815 filename=/dev/nvme0n1 00:10:04.815 [job1] 00:10:04.815 filename=/dev/nvme0n2 00:10:04.815 [job2] 00:10:04.815 filename=/dev/nvme0n3 00:10:04.815 [job3] 00:10:04.815 filename=/dev/nvme0n4 00:10:04.815 Could not set queue depth (nvme0n1) 00:10:04.815 Could not set queue depth (nvme0n2) 00:10:04.815 Could not set queue depth (nvme0n3) 00:10:04.815 Could not set queue depth (nvme0n4) 00:10:05.075 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:05.075 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:05.075 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:05.075 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:05.075 fio-3.35 00:10:05.075 Starting 4 threads 00:10:06.463 00:10:06.463 job0: (groupid=0, jobs=1): err= 0: pid=2802915: Tue Nov 26 19:00:23 2024 00:10:06.463 read: IOPS=4047, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1012msec) 00:10:06.463 slat (nsec): min=1011, max=13233k, avg=110856.38, stdev=813371.22 00:10:06.463 clat (usec): min=4046, max=36205, avg=13335.42, stdev=5923.36 00:10:06.463 lat (usec): min=4057, max=36210, avg=13446.27, stdev=5976.95 00:10:06.463 clat percentiles (usec): 00:10:06.463 | 1.00th=[ 4686], 5.00th=[ 7570], 10.00th=[ 8160], 20.00th=[ 8455], 00:10:06.463 | 30.00th=[ 8848], 40.00th=[ 9372], 50.00th=[12125], 60.00th=[14091], 00:10:06.463 | 70.00th=[15664], 80.00th=[17171], 90.00th=[20055], 95.00th=[26870], 00:10:06.463 | 99.00th=[32375], 99.50th=[34341], 99.90th=[36439], 99.95th=[36439], 00:10:06.463 | 99.99th=[36439] 00:10:06.463 write: IOPS=4496, BW=17.6MiB/s (18.4MB/s)(17.8MiB/1012msec); 0 zone resets 00:10:06.463 slat (nsec): min=1653, max=16169k, avg=114429.48, stdev=635473.32 00:10:06.463 clat (usec): min=2489, max=58206, avg=16157.19, stdev=8492.27 00:10:06.463 lat (usec): min=2496, max=58214, avg=16271.62, stdev=8539.92 00:10:06.463 clat percentiles (usec): 00:10:06.463 | 1.00th=[ 3261], 5.00th=[ 6390], 10.00th=[ 7439], 20.00th=[11731], 00:10:06.463 | 30.00th=[13698], 40.00th=[14484], 50.00th=[14746], 60.00th=[14877], 00:10:06.463 | 70.00th=[15008], 80.00th=[17433], 90.00th=[26346], 95.00th=[36439], 00:10:06.463 | 99.00th=[50594], 99.50th=[55313], 99.90th=[57934], 99.95th=[58459], 00:10:06.463 | 99.99th=[58459] 00:10:06.463 bw ( KiB/s): min=16384, max=19000, per=16.87%, avg=17692.00, stdev=1849.79, samples=2 00:10:06.463 iops : min= 4096, max= 4750, avg=4423.00, stdev=462.45, samples=2 00:10:06.463 lat (msec) : 4=1.08%, 10=26.42%, 20=58.52%, 50=13.44%, 100=0.54% 00:10:06.463 cpu : usr=3.26%, sys=4.55%, ctx=501, majf=0, minf=1 00:10:06.463 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:06.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:06.464 issued rwts: total=4096,4550,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.464 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:06.464 job1: (groupid=0, jobs=1): err= 0: pid=2802933: Tue Nov 26 19:00:23 2024 00:10:06.464 read: IOPS=9134, BW=35.7MiB/s (37.4MB/s)(35.9MiB/1006msec) 
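(Sanity check on the read line above: bandwidth is IOPS times block size, so 9134 IOPS x 4096 B = 37.4 MB/s, the 35.7 MiB/s fio prints, and 35.9 MiB of io over the 1006 ms runtime comes out to the same rate.)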
00:10:06.464 slat (nsec): min=958, max=6484.0k, avg=57424.51, stdev=415137.70 00:10:06.464 clat (usec): min=2841, max=13760, avg=7566.05, stdev=1680.60 00:10:06.464 lat (usec): min=2845, max=13766, avg=7623.48, stdev=1706.99 00:10:06.464 clat percentiles (usec): 00:10:06.464 | 1.00th=[ 3556], 5.00th=[ 5473], 10.00th=[ 5866], 20.00th=[ 6390], 00:10:06.464 | 30.00th=[ 6783], 40.00th=[ 6915], 50.00th=[ 7177], 60.00th=[ 7504], 00:10:06.464 | 70.00th=[ 7898], 80.00th=[ 8979], 90.00th=[ 9896], 95.00th=[11076], 00:10:06.464 | 99.00th=[12387], 99.50th=[12780], 99.90th=[13173], 99.95th=[13173], 00:10:06.464 | 99.99th=[13698] 00:10:06.464 write: IOPS=9161, BW=35.8MiB/s (37.5MB/s)(36.0MiB/1006msec); 0 zone resets 00:10:06.464 slat (nsec): min=1563, max=5837.3k, avg=43785.60, stdev=277193.13 00:10:06.464 clat (usec): min=653, max=13222, avg=6304.05, stdev=1523.15 00:10:06.464 lat (usec): min=678, max=13235, avg=6347.84, stdev=1537.03 00:10:06.464 clat percentiles (usec): 00:10:06.464 | 1.00th=[ 2343], 5.00th=[ 3556], 10.00th=[ 4146], 20.00th=[ 4883], 00:10:06.464 | 30.00th=[ 5997], 40.00th=[ 6587], 50.00th=[ 6849], 60.00th=[ 6915], 00:10:06.464 | 70.00th=[ 7111], 80.00th=[ 7242], 90.00th=[ 7504], 95.00th=[ 7832], 00:10:06.464 | 99.00th=[ 9765], 99.50th=[10683], 99.90th=[12911], 99.95th=[13173], 00:10:06.464 | 99.99th=[13173] 00:10:06.464 bw ( KiB/s): min=36816, max=36912, per=35.14%, avg=36864.00, stdev=67.88, samples=2 00:10:06.464 iops : min= 9204, max= 9228, avg=9216.00, stdev=16.97, samples=2 00:10:06.464 lat (usec) : 750=0.02%, 1000=0.02% 00:10:06.464 lat (msec) : 2=0.41%, 4=4.47%, 10=90.02%, 20=5.05% 00:10:06.464 cpu : usr=7.26%, sys=9.25%, ctx=769, majf=0, minf=2 00:10:06.464 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:06.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:06.464 issued rwts: total=9189,9216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.464 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:06.464 job2: (groupid=0, jobs=1): err= 0: pid=2802953: Tue Nov 26 19:00:23 2024 00:10:06.464 read: IOPS=7712, BW=30.1MiB/s (31.6MB/s)(30.4MiB/1008msec) 00:10:06.464 slat (nsec): min=982, max=7902.2k, avg=68487.66, stdev=501668.61 00:10:06.464 clat (usec): min=2502, max=16021, avg=8685.41, stdev=1917.39 00:10:06.464 lat (usec): min=2511, max=18175, avg=8753.90, stdev=1958.58 00:10:06.464 clat percentiles (usec): 00:10:06.464 | 1.00th=[ 5604], 5.00th=[ 6521], 10.00th=[ 6783], 20.00th=[ 7504], 00:10:06.464 | 30.00th=[ 7767], 40.00th=[ 7898], 50.00th=[ 8094], 60.00th=[ 8291], 00:10:06.464 | 70.00th=[ 8979], 80.00th=[10028], 90.00th=[11600], 95.00th=[13042], 00:10:06.464 | 99.00th=[14353], 99.50th=[14746], 99.90th=[15401], 99.95th=[15533], 00:10:06.464 | 99.99th=[16057] 00:10:06.464 write: IOPS=8126, BW=31.7MiB/s (33.3MB/s)(32.0MiB/1008msec); 0 zone resets 00:10:06.464 slat (nsec): min=1589, max=6541.1k, avg=51687.17, stdev=319181.39 00:10:06.464 clat (usec): min=1194, max=15574, avg=7368.88, stdev=1783.07 00:10:06.464 lat (usec): min=1203, max=15577, avg=7420.57, stdev=1798.23 00:10:06.464 clat percentiles (usec): 00:10:06.464 | 1.00th=[ 2999], 5.00th=[ 4293], 10.00th=[ 4817], 20.00th=[ 5800], 00:10:06.464 | 30.00th=[ 7111], 40.00th=[ 7570], 50.00th=[ 7832], 60.00th=[ 8029], 00:10:06.464 | 70.00th=[ 8094], 80.00th=[ 8225], 90.00th=[ 8455], 95.00th=[10421], 00:10:06.464 | 99.00th=[11994], 99.50th=[13829], 99.90th=[15401], 
99.95th=[15533], 00:10:06.464 | 99.99th=[15533] 00:10:06.464 bw ( KiB/s): min=32504, max=32768, per=31.11%, avg=32636.00, stdev=186.68, samples=2 00:10:06.464 iops : min= 8126, max= 8192, avg=8159.00, stdev=46.67, samples=2 00:10:06.464 lat (msec) : 2=0.09%, 4=2.22%, 10=83.81%, 20=13.88% 00:10:06.464 cpu : usr=5.56%, sys=8.74%, ctx=738, majf=0, minf=1 00:10:06.464 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:06.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:06.464 issued rwts: total=7774,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.464 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:06.464 job3: (groupid=0, jobs=1): err= 0: pid=2802961: Tue Nov 26 19:00:23 2024 00:10:06.464 read: IOPS=4047, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1012msec) 00:10:06.464 slat (nsec): min=983, max=17929k, avg=131634.87, stdev=932185.97 00:10:06.464 clat (usec): min=3968, max=54128, avg=14887.56, stdev=7504.19 00:10:06.464 lat (usec): min=3977, max=54130, avg=15019.20, stdev=7581.59 00:10:06.464 clat percentiles (usec): 00:10:06.464 | 1.00th=[ 5932], 5.00th=[ 8029], 10.00th=[ 8586], 20.00th=[ 8979], 00:10:06.464 | 30.00th=[ 9896], 40.00th=[12518], 50.00th=[13829], 60.00th=[14484], 00:10:06.464 | 70.00th=[15533], 80.00th=[17957], 90.00th=[25297], 95.00th=[28443], 00:10:06.464 | 99.00th=[44827], 99.50th=[52167], 99.90th=[54264], 99.95th=[54264], 00:10:06.464 | 99.99th=[54264] 00:10:06.464 write: IOPS=4527, BW=17.7MiB/s (18.5MB/s)(17.9MiB/1012msec); 0 zone resets 00:10:06.464 slat (nsec): min=1597, max=9209.0k, avg=95516.56, stdev=454038.16 00:10:06.464 clat (usec): min=1174, max=54126, avg=14705.77, stdev=5871.17 00:10:06.464 lat (usec): min=1184, max=54128, avg=14801.28, stdev=5886.78 00:10:06.464 clat percentiles (usec): 00:10:06.464 | 1.00th=[ 4424], 5.00th=[ 5800], 10.00th=[ 7832], 20.00th=[10945], 00:10:06.464 | 30.00th=[13566], 40.00th=[14484], 50.00th=[14746], 60.00th=[14877], 00:10:06.464 | 70.00th=[15008], 80.00th=[15533], 90.00th=[20055], 95.00th=[25822], 00:10:06.464 | 99.00th=[38011], 99.50th=[38536], 99.90th=[44827], 99.95th=[45351], 00:10:06.464 | 99.99th=[54264] 00:10:06.464 bw ( KiB/s): min=15560, max=20080, per=16.99%, avg=17820.00, stdev=3196.12, samples=2 00:10:06.464 iops : min= 3890, max= 5020, avg=4455.00, stdev=799.03, samples=2 00:10:06.464 lat (msec) : 2=0.10%, 4=0.31%, 10=22.32%, 20=65.91%, 50=11.09% 00:10:06.464 lat (msec) : 100=0.27% 00:10:06.464 cpu : usr=2.77%, sys=4.85%, ctx=511, majf=0, minf=2 00:10:06.464 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:06.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:06.464 issued rwts: total=4096,4582,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.464 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:06.464 00:10:06.464 Run status group 0 (all jobs): 00:10:06.464 READ: bw=97.1MiB/s (102MB/s), 15.8MiB/s-35.7MiB/s (16.6MB/s-37.4MB/s), io=98.3MiB (103MB), run=1006-1012msec 00:10:06.464 WRITE: bw=102MiB/s (107MB/s), 17.6MiB/s-35.8MiB/s (18.4MB/s-37.5MB/s), io=104MiB (109MB), run=1006-1012msec 00:10:06.464 00:10:06.464 Disk stats (read/write): 00:10:06.464 nvme0n1: ios=3482/3584, merge=0/0, ticks=44437/58747, in_queue=103184, util=99.00% 00:10:06.464 nvme0n2: ios=7712/7703, merge=0/0, ticks=54923/45715, in_queue=100638, 
util=91.03% 00:10:06.464 nvme0n3: ios=6550/6656, merge=0/0, ticks=54171/46782, in_queue=100953, util=88.40% 00:10:06.464 nvme0n4: ios=3575/3591, merge=0/0, ticks=51532/50676, in_queue=102208, util=89.54% 00:10:06.464 19:00:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:06.464 19:00:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2803120 00:10:06.464 19:00:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:06.464 19:00:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:06.464 [global] 00:10:06.464 thread=1 00:10:06.464 invalidate=1 00:10:06.464 rw=read 00:10:06.464 time_based=1 00:10:06.464 runtime=10 00:10:06.464 ioengine=libaio 00:10:06.464 direct=1 00:10:06.464 bs=4096 00:10:06.464 iodepth=1 00:10:06.464 norandommap=1 00:10:06.464 numjobs=1 00:10:06.464 00:10:06.464 [job0] 00:10:06.464 filename=/dev/nvme0n1 00:10:06.464 [job1] 00:10:06.464 filename=/dev/nvme0n2 00:10:06.464 [job2] 00:10:06.464 filename=/dev/nvme0n3 00:10:06.464 [job3] 00:10:06.464 filename=/dev/nvme0n4 00:10:06.464 Could not set queue depth (nvme0n1) 00:10:06.464 Could not set queue depth (nvme0n2) 00:10:06.464 Could not set queue depth (nvme0n3) 00:10:06.464 Could not set queue depth (nvme0n4) 00:10:07.034 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.034 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.034 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.034 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.034 fio-3.35 00:10:07.034 Starting 4 threads 00:10:09.582 19:00:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:09.582 19:00:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:09.582 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=253952, buflen=4096 00:10:09.582 fio: pid=2803447, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:09.842 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=454656, buflen=4096 00:10:09.842 fio: pid=2803440, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:09.842 19:00:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:09.842 19:00:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:10.103 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=12201984, buflen=4096 00:10:10.103 fio: pid=2803407, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:10.103 19:00:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:10.103 19:00:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:10.103 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=3534848, buflen=4096 00:10:10.103 fio: pid=2803422, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:10.103 19:00:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:10.103 19:00:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:10.103 00:10:10.103 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2803407: Tue Nov 26 19:00:27 2024 00:10:10.103 read: IOPS=1008, BW=4034KiB/s (4131kB/s)(11.6MiB/2954msec) 00:10:10.103 slat (usec): min=5, max=33360, avg=53.14, stdev=856.61 00:10:10.103 clat (usec): min=362, max=41476, avg=923.10, stdev=754.07 00:10:10.103 lat (usec): min=389, max=41485, avg=976.25, stdev=1142.34 00:10:10.103 clat percentiles (usec): 00:10:10.103 | 1.00th=[ 594], 5.00th=[ 709], 10.00th=[ 758], 20.00th=[ 791], 00:10:10.103 | 30.00th=[ 832], 40.00th=[ 889], 50.00th=[ 947], 60.00th=[ 971], 00:10:10.103 | 70.00th=[ 988], 80.00th=[ 1012], 90.00th=[ 1037], 95.00th=[ 1057], 00:10:10.103 | 99.00th=[ 1139], 99.50th=[ 1188], 99.90th=[ 1450], 99.95th=[ 3195], 00:10:10.103 | 99.99th=[41681] 00:10:10.103 bw ( KiB/s): min= 4008, max= 4664, per=82.32%, avg=4216.00, stdev=276.03, samples=5 00:10:10.103 iops : min= 1002, max= 1166, avg=1054.00, stdev=69.01, samples=5 00:10:10.103 lat (usec) : 500=0.34%, 750=8.56%, 1000=66.58% 00:10:10.103 lat (msec) : 2=24.43%, 4=0.03%, 50=0.03% 00:10:10.103 cpu : usr=1.96%, sys=3.56%, ctx=2984, majf=0, minf=2 00:10:10.103 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.103 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.103 issued rwts: total=2980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.103 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.103 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2803422: Tue Nov 26 19:00:27 2024 00:10:10.104 read: IOPS=275, BW=1101KiB/s (1127kB/s)(3452KiB/3136msec) 00:10:10.104 slat (usec): min=6, max=16904, avg=79.17, stdev=811.91 00:10:10.104 clat (usec): min=696, max=42228, avg=3522.02, stdev=9698.42 00:10:10.104 lat (usec): min=703, max=42254, avg=3592.37, stdev=9718.21 00:10:10.104 clat percentiles (usec): 00:10:10.104 | 1.00th=[ 775], 5.00th=[ 889], 10.00th=[ 955], 20.00th=[ 1020], 00:10:10.104 | 30.00th=[ 1057], 40.00th=[ 1074], 50.00th=[ 1090], 60.00th=[ 1106], 00:10:10.104 | 70.00th=[ 1123], 80.00th=[ 1156], 90.00th=[ 1188], 95.00th=[41157], 00:10:10.104 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:10.104 | 99.99th=[42206] 00:10:10.104 bw ( KiB/s): min= 720, max= 2000, per=21.48%, avg=1100.33, stdev=471.19, samples=6 00:10:10.104 iops : min= 180, max= 500, avg=275.00, stdev=117.81, samples=6 00:10:10.104 lat (usec) : 750=0.46%, 1000=17.01% 00:10:10.104 lat (msec) : 2=76.39%, 50=6.02% 00:10:10.104 cpu : usr=0.45%, sys=1.02%, ctx=869, majf=0, minf=2 00:10:10.104 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.104 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.104 issued rwts: total=864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.104 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.104 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2803440: Tue Nov 26 19:00:27 2024 00:10:10.104 read: IOPS=40, BW=160KiB/s (164kB/s)(444KiB/2776msec) 00:10:10.104 slat (usec): min=9, max=15526, avg=164.20, stdev=1464.68 00:10:10.104 clat (usec): min=648, max=43136, avg=24627.56, stdev=20371.31 00:10:10.104 lat (usec): min=674, max=57981, avg=24793.00, stdev=20546.66 00:10:10.104 clat percentiles (usec): 00:10:10.104 | 1.00th=[ 660], 5.00th=[ 898], 10.00th=[ 930], 20.00th=[ 979], 00:10:10.104 | 30.00th=[ 1029], 40.00th=[ 1385], 50.00th=[41681], 60.00th=[42206], 00:10:10.104 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[43254], 00:10:10.104 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:10.104 | 99.99th=[43254] 00:10:10.104 bw ( KiB/s): min= 88, max= 456, per=3.28%, avg=168.00, stdev=161.10, samples=5 00:10:10.104 iops : min= 22, max= 114, avg=42.00, stdev=40.27, samples=5 00:10:10.104 lat (usec) : 750=1.79%, 1000=24.11% 00:10:10.104 lat (msec) : 2=16.07%, 50=57.14% 00:10:10.104 cpu : usr=0.00%, sys=0.18%, ctx=113, majf=0, minf=1 00:10:10.104 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.104 complete : 0=0.9%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.104 issued rwts: total=112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.104 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.104 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2803447: Tue Nov 26 19:00:27 2024 00:10:10.104 read: IOPS=24, BW=95.3KiB/s (97.6kB/s)(248KiB/2603msec) 00:10:10.104 slat (nsec): min=24999, max=43276, avg=25609.38, stdev=2268.14 00:10:10.104 clat (usec): min=930, max=43085, avg=41592.61, stdev=5271.43 00:10:10.104 lat (usec): min=973, max=43110, avg=41618.22, stdev=5269.16 00:10:10.104 clat percentiles (usec): 00:10:10.104 | 1.00th=[ 930], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:10:10.104 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:10:10.104 | 70.00th=[42730], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:10:10.104 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:10.104 | 99.99th=[43254] 00:10:10.104 bw ( KiB/s): min= 95, max= 96, per=1.86%, avg=95.80, stdev= 0.45, samples=5 00:10:10.104 iops : min= 23, max= 24, avg=23.80, stdev= 0.45, samples=5 00:10:10.104 lat (usec) : 1000=1.59% 00:10:10.104 lat (msec) : 50=96.83% 00:10:10.104 cpu : usr=0.12%, sys=0.00%, ctx=63, majf=0, minf=1 00:10:10.104 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.104 complete : 0=1.6%, 4=98.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.104 issued rwts: total=63,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.104 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.104 00:10:10.104 Run status group 0 (all jobs): 00:10:10.104 READ: bw=5121KiB/s (5244kB/s), 95.3KiB/s-4034KiB/s (97.6kB/s-4131kB/s), io=15.7MiB (16.4MB), run=2603-3136msec 00:10:10.104 
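The err=95 (Operation not supported) failures above are the point of this test rather than a regression: fio.sh launches the 10-second read job in the background and then hot-removes the bdevs underneath it over RPC while I/O is still in flight. A condensed sketch of that pattern, with paths shortened and job.fio standing in for the generated job file:

  fio job.fio &
  fio_pid=$!
  sleep 3
  scripts/rpc.py bdev_raid_delete raid0       # pull the backing bdevs out from under the job
  scripts/rpc.py bdev_malloc_delete Malloc0   # ...and likewise for the remaining malloc bdevs
  wait $fio_pid || echo 'nvmf hotplug test: fio failed as expected'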
00:10:10.104 Disk stats (read/write): 00:10:10.104 nvme0n1: ios=2890/0, merge=0/0, ticks=2418/0, in_queue=2418, util=92.09% 00:10:10.104 nvme0n2: ios=847/0, merge=0/0, ticks=2939/0, in_queue=2939, util=94.58% 00:10:10.104 nvme0n3: ios=107/0, merge=0/0, ticks=2565/0, in_queue=2565, util=96.03% 00:10:10.104 nvme0n4: ios=62/0, merge=0/0, ticks=2581/0, in_queue=2581, util=96.42% 00:10:10.365 19:00:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:10.365 19:00:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:10.627 19:00:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:10.627 19:00:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:10.627 19:00:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:10.627 19:00:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:10.888 19:00:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:10.888 19:00:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:11.149 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:11.149 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2803120 00:10:11.149 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:11.149 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:11.149 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.149 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:11.149 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:11.149 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:11.149 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:11.149 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:11.149 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:11.149 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:11.149 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:11.149 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:11.149 nvmf hotplug test: fio failed as expected 00:10:11.149 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:11.412 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:11.412 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:11.412 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:11.412 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:11.412 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:11.412 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:11.412 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:11.412 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:11.412 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:11.412 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:11.412 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:11.412 rmmod nvme_tcp 00:10:11.412 rmmod nvme_fabrics 00:10:11.412 rmmod nvme_keyring 00:10:11.412 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:11.412 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:11.412 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:11.412 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2799225 ']' 00:10:11.412 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2799225 00:10:11.412 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2799225 ']' 00:10:11.412 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2799225 00:10:11.412 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:11.412 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:11.412 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2799225 00:10:11.412 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:11.412 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:11.412 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2799225' 00:10:11.412 killing process with pid 2799225 00:10:11.412 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2799225 00:10:11.412 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2799225 00:10:11.673 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:11.673 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:11.674 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # 
nvmf_tcp_fini 00:10:11.674 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:11.674 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:11.674 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:11.674 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:11.674 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:11.674 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:11.674 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.674 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.674 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.588 19:00:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:13.588 00:10:13.588 real 0m29.306s 00:10:13.588 user 2m38.421s 00:10:13.588 sys 0m9.360s 00:10:13.588 19:00:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.588 19:00:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.588 ************************************ 00:10:13.588 END TEST nvmf_fio_target 00:10:13.588 ************************************ 00:10:13.850 19:00:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:13.851 19:00:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:13.851 19:00:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.851 19:00:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:13.851 ************************************ 00:10:13.851 START TEST nvmf_bdevio 00:10:13.851 ************************************ 00:10:13.851 19:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:13.851 * Looking for test storage... 
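The tail end of nvmf_fio_target above is the standard nvmftestfini teardown: disconnect the initiator, delete the NVMe-oF subsystem, unload the kernel NVMe/TCP modules, kill the target process, and strip the SPDK-tagged firewall rules. Condensed from the trace, with the NQN and pid as used by this run:

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill 2799225                                           # the nvmf_tgt reactor_0 process
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK_NVMF-commented rules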
00:10:13.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:13.851 19:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:13.851 19:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:10:13.851 19:00:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:13.851 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:13.851 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:13.851 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:13.851 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:13.851 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:13.851 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:13.851 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:13.851 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:13.851 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:13.851 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:13.851 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:13.851 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:13.851 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:13.851 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:13.851 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:13.851 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:13.851 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:13.851 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:13.851 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:13.851 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:13.851 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:13.851 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:13.851 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:13.851 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:13.851 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:14.113 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:14.113 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:14.113 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:14.113 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:14.113 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:14.113 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:14.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.113 --rc genhtml_branch_coverage=1 00:10:14.113 --rc genhtml_function_coverage=1 00:10:14.113 --rc genhtml_legend=1 00:10:14.113 --rc geninfo_all_blocks=1 00:10:14.113 --rc geninfo_unexecuted_blocks=1 00:10:14.113 00:10:14.113 ' 00:10:14.113 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:14.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.113 --rc genhtml_branch_coverage=1 00:10:14.113 --rc genhtml_function_coverage=1 00:10:14.113 --rc genhtml_legend=1 00:10:14.113 --rc geninfo_all_blocks=1 00:10:14.113 --rc geninfo_unexecuted_blocks=1 00:10:14.113 00:10:14.113 ' 00:10:14.113 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:14.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.114 --rc genhtml_branch_coverage=1 00:10:14.114 --rc genhtml_function_coverage=1 00:10:14.114 --rc genhtml_legend=1 00:10:14.114 --rc geninfo_all_blocks=1 00:10:14.114 --rc geninfo_unexecuted_blocks=1 00:10:14.114 00:10:14.114 ' 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:14.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.114 --rc genhtml_branch_coverage=1 00:10:14.114 --rc genhtml_function_coverage=1 00:10:14.114 --rc genhtml_legend=1 00:10:14.114 --rc geninfo_all_blocks=1 00:10:14.114 --rc geninfo_unexecuted_blocks=1 00:10:14.114 00:10:14.114 ' 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:14.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:14.114 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:22.264 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:22.264 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:22.264 19:00:38 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:22.264 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:22.264 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:22.264 
19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:22.264 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:22.264 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:10:22.264 00:10:22.264 --- 10.0.0.2 ping statistics --- 00:10:22.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.264 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:22.264 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:22.264 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:10:22.264 00:10:22.264 --- 10.0.0.1 ping statistics --- 00:10:22.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.264 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2808672 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2808672 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2808672 ']' 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:22.264 19:00:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.264 [2024-11-26 19:00:38.677373] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
00:10:22.265 [2024-11-26 19:00:38.677440] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:22.265 [2024-11-26 19:00:38.776572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:22.265 [2024-11-26 19:00:38.828860] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:22.265 [2024-11-26 19:00:38.828913] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:22.265 [2024-11-26 19:00:38.828921] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:22.265 [2024-11-26 19:00:38.828929] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:22.265 [2024-11-26 19:00:38.828935] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:22.265 [2024-11-26 19:00:38.830968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:22.265 [2024-11-26 19:00:38.831129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:22.265 [2024-11-26 19:00:38.831260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:22.265 [2024-11-26 19:00:38.831260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:22.526 19:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:22.526 19:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:22.526 19:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:22.526 19:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:22.526 19:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.526 19:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:22.526 19:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:22.526 19:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.526 19:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.526 [2024-11-26 19:00:39.560188] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:22.526 19:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.526 19:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:22.526 19:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.526 19:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.526 Malloc0 00:10:22.526 19:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.526 19:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:22.526 19:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.526 19:00:39 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.526 19:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.526 19:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:22.526 19:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.526 19:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.526 19:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.526 19:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:22.526 19:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.526 19:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.526 [2024-11-26 19:00:39.635797] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:22.526 19:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.526 19:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:22.526 19:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:22.526 19:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:22.526 19:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:22.526 19:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:22.526 19:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:22.526 { 00:10:22.526 "params": { 00:10:22.526 "name": "Nvme$subsystem", 00:10:22.526 "trtype": "$TEST_TRANSPORT", 00:10:22.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:22.526 "adrfam": "ipv4", 00:10:22.526 "trsvcid": "$NVMF_PORT", 00:10:22.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:22.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:22.527 "hdgst": ${hdgst:-false}, 00:10:22.527 "ddgst": ${ddgst:-false} 00:10:22.527 }, 00:10:22.527 "method": "bdev_nvme_attach_controller" 00:10:22.527 } 00:10:22.527 EOF 00:10:22.527 )") 00:10:22.527 19:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:22.527 19:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:22.527 19:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:22.527 19:00:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:22.527 "params": { 00:10:22.527 "name": "Nvme1", 00:10:22.527 "trtype": "tcp", 00:10:22.527 "traddr": "10.0.0.2", 00:10:22.527 "adrfam": "ipv4", 00:10:22.527 "trsvcid": "4420", 00:10:22.527 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:22.527 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:22.527 "hdgst": false, 00:10:22.527 "ddgst": false 00:10:22.527 }, 00:10:22.527 "method": "bdev_nvme_attach_controller" 00:10:22.527 }' 00:10:22.527 [2024-11-26 19:00:39.693635] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
00:10:22.527 [2024-11-26 19:00:39.693705] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2808725 ] 00:10:22.789 [2024-11-26 19:00:39.788138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:22.789 [2024-11-26 19:00:39.845208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.789 [2024-11-26 19:00:39.845331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:22.789 [2024-11-26 19:00:39.845332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.051 I/O targets: 00:10:23.051 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:23.051 00:10:23.051 00:10:23.051 CUnit - A unit testing framework for C - Version 2.1-3 00:10:23.051 http://cunit.sourceforge.net/ 00:10:23.051 00:10:23.051 00:10:23.051 Suite: bdevio tests on: Nvme1n1 00:10:23.051 Test: blockdev write read block ...passed 00:10:23.051 Test: blockdev write zeroes read block ...passed 00:10:23.051 Test: blockdev write zeroes read no split ...passed 00:10:23.051 Test: blockdev write zeroes read split ...passed 00:10:23.051 Test: blockdev write zeroes read split partial ...passed 00:10:23.051 Test: blockdev reset ...[2024-11-26 19:00:40.186151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:23.051 [2024-11-26 19:00:40.186259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f970 (9): Bad file descriptor 00:10:23.052 [2024-11-26 19:00:40.206007] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:23.052 passed 00:10:23.052 Test: blockdev write read 8 blocks ...passed 00:10:23.052 Test: blockdev write read size > 128k ...passed 00:10:23.052 Test: blockdev write read invalid size ...passed 00:10:23.052 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:23.052 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:23.052 Test: blockdev write read max offset ...passed 00:10:23.313 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:23.313 Test: blockdev writev readv 8 blocks ...passed 00:10:23.313 Test: blockdev writev readv 30 x 1block ...passed 00:10:23.313 Test: blockdev writev readv block ...passed 00:10:23.313 Test: blockdev writev readv size > 128k ...passed 00:10:23.313 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:23.313 Test: blockdev comparev and writev ...[2024-11-26 19:00:40.471219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.313 [2024-11-26 19:00:40.471271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:23.313 [2024-11-26 19:00:40.471287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.313 [2024-11-26 19:00:40.471297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:23.313 [2024-11-26 19:00:40.471793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.313 [2024-11-26 19:00:40.471806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:23.313 [2024-11-26 19:00:40.471822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.313 [2024-11-26 19:00:40.471832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:23.313 [2024-11-26 19:00:40.472351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.313 [2024-11-26 19:00:40.472371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:23.313 [2024-11-26 19:00:40.472387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.313 [2024-11-26 19:00:40.472395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:23.313 [2024-11-26 19:00:40.472911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.313 [2024-11-26 19:00:40.472924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:23.313 [2024-11-26 19:00:40.472939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.313 [2024-11-26 19:00:40.472949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:23.313 passed 00:10:23.574 Test: blockdev nvme passthru rw ...passed 00:10:23.574 Test: blockdev nvme passthru vendor specific ...[2024-11-26 19:00:40.558901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:23.574 [2024-11-26 19:00:40.558916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:23.574 [2024-11-26 19:00:40.559310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:23.574 [2024-11-26 19:00:40.559323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:23.574 [2024-11-26 19:00:40.559727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:23.574 [2024-11-26 19:00:40.559737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:23.574 [2024-11-26 19:00:40.560119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:23.574 [2024-11-26 19:00:40.560130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:23.574 passed 00:10:23.574 Test: blockdev nvme admin passthru ...passed 00:10:23.574 Test: blockdev copy ...passed 00:10:23.574 00:10:23.574 Run Summary: Type Total Ran Passed Failed Inactive 00:10:23.574 suites 1 1 n/a 0 0 00:10:23.574 tests 23 23 23 0 0 00:10:23.574 asserts 152 152 152 0 n/a 00:10:23.574 00:10:23.574 Elapsed time = 1.211 seconds 00:10:23.574 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:23.574 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.574 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:23.574 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.574 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:23.574 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:23.575 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:23.575 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:23.575 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:23.575 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:23.575 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:23.575 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:23.575 rmmod nvme_tcp 00:10:23.575 rmmod nvme_fabrics 00:10:23.836 rmmod nvme_keyring 00:10:23.836 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:23.836 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:23.836 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
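For anyone replaying this bdevio stage by hand, the rpc_cmd calls above boil down to the following sketch. rpc_cmd is the autotest wrapper around the in-tree scripts/rpc.py; this assumes an nvmf_tgt process is already running and listening on the default /var/tmp/spdk.sock, and that 10.0.0.2 is reachable as set up by nvmftestinit:

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB IO unit size
    $rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdevio binary is then pointed at that listener through a generated JSON config (the gen_nvmf_target_json output above), whose single bdev_nvme_attach_controller entry names traddr 10.0.0.2, trsvcid 4420 and subnqn nqn.2016-06.io.spdk:cnode1.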
00:10:23.836 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2808672 ']' 00:10:23.836 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2808672 00:10:23.836 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2808672 ']' 00:10:23.836 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2808672 00:10:23.836 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:23.836 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:23.836 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2808672 00:10:23.836 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:23.836 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:23.836 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2808672' 00:10:23.836 killing process with pid 2808672 00:10:23.836 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2808672 00:10:23.836 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2808672 00:10:23.837 19:00:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:23.837 19:00:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:23.837 19:00:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:23.837 19:00:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:23.837 19:00:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:23.837 19:00:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:23.837 19:00:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:23.837 19:00:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:23.837 19:00:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:23.837 19:00:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.837 19:00:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:23.837 19:00:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:26.402 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:26.403 00:10:26.403 real 0m12.245s 00:10:26.403 user 0m13.078s 00:10:26.403 sys 0m6.307s 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:26.403 ************************************ 00:10:26.403 END TEST nvmf_bdevio 00:10:26.403 ************************************ 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:26.403 00:10:26.403 real 5m5.315s 00:10:26.403 user 11m54.726s 00:10:26.403 sys 1m52.367s 
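Before the suite moves on, it is worth recapping the network plumbing that nvmftestinit built at the top of this run and that nvmftestfini has just torn down. One of the two cvl_0_* net devices discovered above is moved into a private network namespace so that target and initiator traffic must traverse the physical E810 link rather than being short-circuited over loopback. A condensed sketch of those same steps, assuming root and the interface names used above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> initiator

The SPDK_NVMF comment on the iptables rule is what lets the iptr cleanup above strip it generically with iptables-save | grep -v SPDK_NVMF | iptables-restore, and the NVMF_TARGET_NS_CMD prefix seen in the traces is why the nvmf_tgt app itself runs under ip netns exec cvl_0_0_ns_spdk.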
00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:26.403 ************************************ 00:10:26.403 END TEST nvmf_target_core 00:10:26.403 ************************************ 00:10:26.403 19:00:43 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:26.403 19:00:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:26.403 19:00:43 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:26.403 19:00:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:26.403 ************************************ 00:10:26.403 START TEST nvmf_target_extra 00:10:26.403 ************************************ 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:26.403 * Looking for test storage... 00:10:26.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:26.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.403 --rc genhtml_branch_coverage=1 00:10:26.403 --rc genhtml_function_coverage=1 00:10:26.403 --rc genhtml_legend=1 00:10:26.403 --rc geninfo_all_blocks=1 00:10:26.403 --rc geninfo_unexecuted_blocks=1 00:10:26.403 00:10:26.403 ' 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:26.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.403 --rc genhtml_branch_coverage=1 00:10:26.403 --rc genhtml_function_coverage=1 00:10:26.403 --rc genhtml_legend=1 00:10:26.403 --rc geninfo_all_blocks=1 00:10:26.403 --rc geninfo_unexecuted_blocks=1 00:10:26.403 00:10:26.403 ' 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:26.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.403 --rc genhtml_branch_coverage=1 00:10:26.403 --rc genhtml_function_coverage=1 00:10:26.403 --rc genhtml_legend=1 00:10:26.403 --rc geninfo_all_blocks=1 00:10:26.403 --rc geninfo_unexecuted_blocks=1 00:10:26.403 00:10:26.403 ' 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:26.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.403 --rc genhtml_branch_coverage=1 00:10:26.403 --rc genhtml_function_coverage=1 00:10:26.403 --rc genhtml_legend=1 00:10:26.403 --rc geninfo_all_blocks=1 00:10:26.403 --rc geninfo_unexecuted_blocks=1 00:10:26.403 00:10:26.403 ' 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:26.403 19:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:26.404 19:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:26.404 19:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:26.404 19:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:26.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:26.404 19:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:26.404 19:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:26.404 19:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:26.404 19:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:26.404 19:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:26.404 19:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:26.404 19:00:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:26.404 19:00:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:26.404 19:00:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:26.404 19:00:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:26.404 ************************************ 00:10:26.404 START TEST nvmf_example 00:10:26.404 ************************************ 00:10:26.404 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:26.666 * Looking for test storage... 
00:10:26.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:26.666 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:26.666 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:10:26.666 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:26.666 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:26.666 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:26.666 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:26.666 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:26.666 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:26.666 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:26.666 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:26.666 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:26.666 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:26.666 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:26.666 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:26.666 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:26.666 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:26.666 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:26.666 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:26.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.667 --rc genhtml_branch_coverage=1 00:10:26.667 --rc genhtml_function_coverage=1 00:10:26.667 --rc genhtml_legend=1 00:10:26.667 --rc geninfo_all_blocks=1 00:10:26.667 --rc geninfo_unexecuted_blocks=1 00:10:26.667 00:10:26.667 ' 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:26.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.667 --rc genhtml_branch_coverage=1 00:10:26.667 --rc genhtml_function_coverage=1 00:10:26.667 --rc genhtml_legend=1 00:10:26.667 --rc geninfo_all_blocks=1 00:10:26.667 --rc geninfo_unexecuted_blocks=1 00:10:26.667 00:10:26.667 ' 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:26.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.667 --rc genhtml_branch_coverage=1 00:10:26.667 --rc genhtml_function_coverage=1 00:10:26.667 --rc genhtml_legend=1 00:10:26.667 --rc geninfo_all_blocks=1 00:10:26.667 --rc geninfo_unexecuted_blocks=1 00:10:26.667 00:10:26.667 ' 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:26.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.667 --rc genhtml_branch_coverage=1 00:10:26.667 --rc genhtml_function_coverage=1 00:10:26.667 --rc genhtml_legend=1 00:10:26.667 --rc geninfo_all_blocks=1 00:10:26.667 --rc geninfo_unexecuted_blocks=1 00:10:26.667 00:10:26.667 ' 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:26.667 19:00:43 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:26.667 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:26.667 19:00:43 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:26.667 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:26.668 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:26.668 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:26.668 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:26.668 19:00:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:34.873 19:00:50 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:34.873 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:34.873 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:34.873 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:34.873 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:34.873 19:00:50 
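The discovery pass above reduces to a sysfs walk: each PCI function whose vendor:device pair is on the e810 allowlist (0x8086:0x159b here) has its kernel netdev names read from /sys/bus/pci/devices/<bdf>/net/. A condensed sketch of that loop, assuming the two addresses found above (the real nvmf/common.sh builds pci_devs from its pci_bus_cache first):

  for pci in 0000:4b:00.0 0000:4b:00.1; do
    for netdir in /sys/bus/pci/devices/$pci/net/*; do
      [ -e "$netdir" ] || continue                        # function has no bound netdev
      echo "Found net devices under $pci: ${netdir##*/}"
    done
  done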
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:34.873 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:34.874 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:34.874 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:34.874 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:34.874 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:34.874 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:34.874 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:34.874 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:34.874 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:34.874 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:34.874 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:34.874 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:34.874 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:34.874 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:34.874 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:34.874 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:34.874 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:34.874 19:00:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:34.874 19:00:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:34.874 19:00:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:34.874 19:00:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:34.874 19:00:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:34.874 19:00:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:34.874 19:00:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:34.874 19:00:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:34.874 19:00:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:34.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:34.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms
00:10:34.874
00:10:34.874 --- 10.0.0.2 ping statistics ---
00:10:34.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:34.874 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms
00:10:34.874 19:00:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:34.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:34.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms
00:10:34.874
00:10:34.874 --- 10.0.0.1 ping statistics ---
00:10:34.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:34.874 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms
00:10:34.874 19:00:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:34.874 19:00:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0
00:10:34.874 19:00:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:34.874 19:00:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:34.874 19:00:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:10:34.874 19:00:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:10:34.874 19:00:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:34.874 19:00:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:10:34.874 19:00:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:10:34.874 19:00:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:10:34.874 19:00:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:10:34.874 19:00:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:34.874 19:00:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:34.874 19:00:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:10:34.874 19:00:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:10:34.874 19:00:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2813435
00:10:34.874 19:00:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:10:34.874 19:00:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:10:34.874 19:00:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2813435
00:10:34.874 19:00:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2813435 ']'
00:10:34.874 19:00:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:34.874 19:00:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:34.874 19:00:51
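Collected in one place, this is the topology nvmftestinit just built: both E810 ports sit in the same machine (presumably cabled back to back), and moving one of them into a private namespace forces the NVMe/TCP traffic across the physical link instead of the loopback path. The core commands, as traced above:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port, private namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

Every later target-side command is then prefixed with ip netns exec cvl_0_0_ns_spdk, which is exactly what folding NVMF_TARGET_NS_CMD into NVMF_APP and NVMF_EXAMPLE accomplishes.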
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.874 19:00:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:34.874 19:00:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:35.136 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:35.136 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:35.136 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:35.136 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:35.136 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:35.136 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:35.136 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.136 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:35.136 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.136 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:35.136 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.136 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:35.136 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.136 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:35.136 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:35.136 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.136 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:35.136 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.136 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:35.136 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:35.136 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.136 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:35.136 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.136 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:35.136 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable
00:10:35.136 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:35.136 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:35.136 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:10:35.136 19:00:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:10:47.504 Initializing NVMe Controllers
00:10:47.504 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:47.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:47.504 Initialization complete. Launching workers.
00:10:47.504 ========================================================
00:10:47.504 Latency(us)
00:10:47.504 Device Information : IOPS MiB/s Average min max
00:10:47.504 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19001.30 74.22 3369.95 632.21 16296.98
00:10:47.504 ========================================================
00:10:47.504 Total : 19001.30 74.22 3369.95 632.21 16296.98
00:10:47.504
00:10:47.504 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:10:47.504 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:10:47.504 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:47.504 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:10:47.504 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:47.504 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:10:47.504 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:47.504 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:47.504 rmmod nvme_tcp
00:10:47.504 rmmod nvme_fabrics
00:10:47.504 rmmod nvme_keyring
00:10:47.504 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:47.504 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:10:47.504 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:10:47.504 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2813435 ']'
00:10:47.504 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2813435
00:10:47.504 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2813435 ']'
00:10:47.504 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2813435
00:10:47.504 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname
00:10:47.504 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:47.504 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2813435
00:10:47.504 19:01:02
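The whole example boils down to a short provisioning script plus one measurement. rpc_cmd in the trace is a wrapper that drives scripts/rpc.py against /var/tmp/spdk.sock, so the equivalent stand-alone sequence looks like this (rpc.py stands in for the wrapper):

  rpc.py nvmf_create_transport -t tcp -o -u 8192      # TCP transport, 8 KiB I/O units
  rpc.py bdev_malloc_create 64 512                    # 64 MiB RAM-backed bdev, 512 B blocks -> Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The results table is self-consistent: 19001.30 IOPS at 4096 B per I/O is 19001.30 * 4096 / 1048576 = 74.22 MiB/s, matching the MiB/s column.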
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:47.504 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:47.504 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2813435' 00:10:47.504 killing process with pid 2813435 00:10:47.504 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2813435 00:10:47.504 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2813435 00:10:47.504 nvmf threads initialize successfully 00:10:47.504 bdev subsystem init successfully 00:10:47.504 created a nvmf target service 00:10:47.504 create targets's poll groups done 00:10:47.504 all subsystems of target started 00:10:47.504 nvmf target is running 00:10:47.504 all subsystems of target stopped 00:10:47.504 destroy targets's poll groups done 00:10:47.504 destroyed the nvmf target service 00:10:47.504 bdev subsystem finish successfully 00:10:47.504 nvmf threads destroy successfully 00:10:47.504 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:47.504 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:47.504 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:47.504 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:47.504 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:47.504 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:47.504 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:47.504 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:47.504 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:47.504 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.504 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:47.504 19:01:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.765 19:01:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:47.765 19:01:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:47.766 19:01:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:47.766 19:01:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:48.026 00:10:48.026 real 0m21.492s 00:10:48.026 user 0m46.874s 00:10:48.026 sys 0m6.974s 00:10:48.026 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.026 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:48.026 ************************************ 00:10:48.026 END TEST nvmf_example 00:10:48.026 ************************************ 00:10:48.026 19:01:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:48.026 19:01:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:48.026 19:01:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:48.026 19:01:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:48.026 ************************************ 00:10:48.026 START TEST nvmf_filesystem 00:10:48.026 ************************************ 00:10:48.027 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:48.027 * Looking for test storage... 00:10:48.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:48.027 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:48.027 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:48.027 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:48.290 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:48.290 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:48.290 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:48.290 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:48.290 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:48.290 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:48.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.291 --rc genhtml_branch_coverage=1 00:10:48.291 --rc genhtml_function_coverage=1 00:10:48.291 --rc genhtml_legend=1 00:10:48.291 --rc geninfo_all_blocks=1 00:10:48.291 --rc geninfo_unexecuted_blocks=1 00:10:48.291 00:10:48.291 ' 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:48.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.291 --rc genhtml_branch_coverage=1 00:10:48.291 --rc genhtml_function_coverage=1 00:10:48.291 --rc genhtml_legend=1 00:10:48.291 --rc geninfo_all_blocks=1 00:10:48.291 --rc geninfo_unexecuted_blocks=1 00:10:48.291 00:10:48.291 ' 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:48.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.291 --rc genhtml_branch_coverage=1 00:10:48.291 --rc genhtml_function_coverage=1 00:10:48.291 --rc genhtml_legend=1 00:10:48.291 --rc geninfo_all_blocks=1 00:10:48.291 --rc geninfo_unexecuted_blocks=1 00:10:48.291 00:10:48.291 ' 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:48.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.291 --rc genhtml_branch_coverage=1 00:10:48.291 --rc genhtml_function_coverage=1 00:10:48.291 --rc genhtml_legend=1 00:10:48.291 --rc geninfo_all_blocks=1 00:10:48.291 --rc geninfo_unexecuted_blocks=1 00:10:48.291 00:10:48.291 ' 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:48.291 19:01:05 
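The lt/cmp_versions probe above is an ordered version compare: split both versions on dots, compare field by field, and treat the shorter one as zero-padded. Here it concludes lcov 1.15 < 2 and therefore exports the lcov 1.x option set (the --rc lcov_branch_coverage=1 flags seen above). An equivalent check, sketched with sort -V rather than the field loop the script actually uses:

  version_lt() {                  # succeeds when $1 is strictly older than $2
    [ "$1" = "$2" ] && return 1
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
  }
  version_lt 1.15 2 && echo 'lcov is pre-2.0: use the 1.x LCOV_OPTS'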
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:48.291 
19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:48.291 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:48.292 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:48.292 #define SPDK_CONFIG_H 00:10:48.292 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:48.292 #define SPDK_CONFIG_APPS 1 00:10:48.292 #define SPDK_CONFIG_ARCH native 00:10:48.292 #undef SPDK_CONFIG_ASAN 00:10:48.292 #undef SPDK_CONFIG_AVAHI 00:10:48.292 #undef SPDK_CONFIG_CET 00:10:48.292 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:48.292 #define SPDK_CONFIG_COVERAGE 1 00:10:48.292 #define SPDK_CONFIG_CROSS_PREFIX 00:10:48.292 #undef SPDK_CONFIG_CRYPTO 00:10:48.293 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:48.293 #undef SPDK_CONFIG_CUSTOMOCF 00:10:48.293 #undef SPDK_CONFIG_DAOS 00:10:48.293 #define SPDK_CONFIG_DAOS_DIR 00:10:48.293 #define SPDK_CONFIG_DEBUG 1 00:10:48.293 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:48.293 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:48.293 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:48.293 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:48.293 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:48.293 #undef SPDK_CONFIG_DPDK_UADK 00:10:48.293 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:48.293 #define SPDK_CONFIG_EXAMPLES 1 00:10:48.293 #undef SPDK_CONFIG_FC 00:10:48.293 #define SPDK_CONFIG_FC_PATH 00:10:48.293 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:48.293 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:48.293 #define SPDK_CONFIG_FSDEV 1 00:10:48.293 #undef SPDK_CONFIG_FUSE 00:10:48.293 #undef SPDK_CONFIG_FUZZER 00:10:48.293 #define SPDK_CONFIG_FUZZER_LIB 00:10:48.293 #undef SPDK_CONFIG_GOLANG 00:10:48.293 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:48.293 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:48.293 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:48.293 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:48.293 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:48.293 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:48.293 #undef SPDK_CONFIG_HAVE_LZ4 00:10:48.293 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:48.293 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:48.293 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:48.293 #define SPDK_CONFIG_IDXD 1 00:10:48.293 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:48.293 #undef SPDK_CONFIG_IPSEC_MB 00:10:48.293 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:48.293 #define SPDK_CONFIG_ISAL 1 00:10:48.293 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:48.293 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:48.293 #define SPDK_CONFIG_LIBDIR 00:10:48.293 #undef SPDK_CONFIG_LTO 00:10:48.293 #define SPDK_CONFIG_MAX_LCORES 128 00:10:48.293 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:48.293 #define SPDK_CONFIG_NVME_CUSE 1 00:10:48.293 #undef SPDK_CONFIG_OCF 00:10:48.293 #define SPDK_CONFIG_OCF_PATH 00:10:48.293 #define SPDK_CONFIG_OPENSSL_PATH 00:10:48.293 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:48.293 #define SPDK_CONFIG_PGO_DIR 00:10:48.293 #undef SPDK_CONFIG_PGO_USE 00:10:48.293 #define SPDK_CONFIG_PREFIX /usr/local 00:10:48.293 #undef SPDK_CONFIG_RAID5F 00:10:48.293 #undef SPDK_CONFIG_RBD 00:10:48.293 #define SPDK_CONFIG_RDMA 1 00:10:48.293 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:48.293 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:48.293 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:48.293 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:48.293 #define SPDK_CONFIG_SHARED 1 00:10:48.293 #undef SPDK_CONFIG_SMA 00:10:48.293 #define SPDK_CONFIG_TESTS 1 00:10:48.293 #undef SPDK_CONFIG_TSAN 
00:10:48.293 #define SPDK_CONFIG_UBLK 1 00:10:48.293 #define SPDK_CONFIG_UBSAN 1 00:10:48.293 #undef SPDK_CONFIG_UNIT_TESTS 00:10:48.293 #undef SPDK_CONFIG_URING 00:10:48.293 #define SPDK_CONFIG_URING_PATH 00:10:48.293 #undef SPDK_CONFIG_URING_ZNS 00:10:48.293 #undef SPDK_CONFIG_USDT 00:10:48.293 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:48.293 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:48.293 #define SPDK_CONFIG_VFIO_USER 1 00:10:48.293 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:48.293 #define SPDK_CONFIG_VHOST 1 00:10:48.293 #define SPDK_CONFIG_VIRTIO 1 00:10:48.293 #undef SPDK_CONFIG_VTUNE 00:10:48.293 #define SPDK_CONFIG_VTUNE_DIR 00:10:48.293 #define SPDK_CONFIG_WERROR 1 00:10:48.293 #define SPDK_CONFIG_WPDK_DIR 00:10:48.293 #undef SPDK_CONFIG_XNVME 00:10:48.293 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:48.293 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:48.293 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:48.293 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:48.293 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:48.293 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:48.293 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:48.293 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.293 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.293 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.293 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:48.293 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.293 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:48.293 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:48.293 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:48.293 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:48.293 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:48.293 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:48.293 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:48.293 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:48.293 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:48.293 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:48.293 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:48.293 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:48.294 19:01:05 
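A few entries back, applications.sh matched the full include/spdk/config.h dump against the pattern *#define SPDK_CONFIG_DEBUG*; that check gates whether SPDK_AUTOTEST_DEBUG_APPS is honored, and this build passes it since the dump contains #define SPDK_CONFIG_DEBUG 1. Condensed, the probe amounts to the following (an assumed simplification of the real check):

  config_h=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h
  if [[ "$(<"$config_h")" == *'#define SPDK_CONFIG_DEBUG'* ]]; then
    : # debug build: debug-app handling may apply
  fi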
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:48.294 19:01:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:48.294 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:48.295 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
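
The block-long PATH, LD_LIBRARY_PATH and PYTHONPATH values above repeat the same directories many times over because paths/export.sh prepends them unconditionally, once per nested source of the pkgdep export script during the run. Lookup results are unchanged; only the strings balloon. A sketch of an idempotent prepend that would keep the variable flat (an illustration of the alternative, not what the script does today):

path_prepend() {
    # Prepend $1 to PATH only if it is not already a component.
    case ":$PATH:" in
        *":$1:"*) ;;                   # already present, skip
        *) PATH=$1:$PATH ;;
    esac
}
path_prepend /opt/golangci/1.54.2/bin
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/protoc/21.7/bin
export PATH
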
00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
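
Sanitizer plumbing, condensed from the entries above: the leak-suppression file is rebuilt with a single rule for libfuse3.so, and the ASan/UBSan/LSan option strings are exported exactly as traced, so a UBSan hit aborts the test with exit code 134 instead of scrolling past. As a self-contained snippet (values copied verbatim from the log):

rm -rf /var/tmp/asan_suppression_file
echo leak:libfuse3.so >> /var/tmp/asan_suppression_file
export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
# LSan now ignores leaks whose stacks pass through libfuse3.so; everything
# else still fails the run.
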
00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2816235 ]] 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2816235 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 
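
set_test_storage, whose trace follows, checks that the filesystem holding the test directory can absorb the run: the caller asks for 2147483648 bytes (2 GiB, plus what works out to a 64 MiB margin, giving requested_size=2214592512), df -T is parsed into associative arrays keyed by mountpoint, and a candidate directory is accepted only if the available space covers the request and used-plus-requested stays under 95% of the filesystem. With the rows below: target_space = 118207614976 on the overlay root, new_size = 11148894208 + 2214592512 = 13363486720, roughly 10% of the 129356509184-byte filesystem, so /tmp/spdk.ITjWsj/tests/target passes. A simplified reconstruction of that check (byte-sized df output via --block-size=1 is an assumption; the real helper in common/autotest_common.sh derives its numbers its own way):

requested_size=$((2147483648 + 64 * 1024 * 1024))    # 2 GiB + 64 MiB margin
declare -A fss sizes avails uses
while read -r src fs size use avail _ mnt; do
    # Same field order as the traced read: source, type, size, used, avail,
    # use%, mountpoint.
    fss[$mnt]=$fs
    sizes[$mnt]=$size
    uses[$mnt]=$use
    avails[$mnt]=$avail
done < <(df -T --block-size=1 | grep -v Filesystem)

mount=/                        # mountpoint holding the test dir in this run
target_space=${avails[$mount]}
if (( target_space >= requested_size )); then
    new_size=$(( ${uses[$mount]} + requested_size ))
    if (( new_size * 100 / ${sizes[$mount]} > 95 )); then
        echo "test would fill $mount past 95%" >&2
    fi
fi
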
00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.ITjWsj 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:48.296 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.ITjWsj/tests/target /tmp/spdk.ITjWsj 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:10:48.297 19:01:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=118207614976 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356509184 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11148894208 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64666886144 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678252544 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847934976 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871302656 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23367680 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=216064 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=287744 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:48.297 19:01:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64677265408 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678256640 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=991232 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935634944 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935647232 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:48.297 * Looking for test storage... 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=118207614976 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=13363486720 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:48.297 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:10:48.297 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:10:48.298 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:48.298 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:48.298 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:10:48.298 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:10:48.298 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:48.298 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:48.298 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:48.298 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:48.298 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:48.298 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:10:48.298 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:48.298 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:48.298 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:48.298 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:48.298 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:48.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.560 --rc genhtml_branch_coverage=1 00:10:48.560 --rc genhtml_function_coverage=1 00:10:48.560 --rc genhtml_legend=1 00:10:48.560 --rc geninfo_all_blocks=1 00:10:48.560 --rc geninfo_unexecuted_blocks=1 00:10:48.560 00:10:48.560 ' 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:48.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.560 --rc genhtml_branch_coverage=1 00:10:48.560 --rc genhtml_function_coverage=1 00:10:48.560 --rc genhtml_legend=1 00:10:48.560 --rc geninfo_all_blocks=1 00:10:48.560 --rc geninfo_unexecuted_blocks=1 00:10:48.560 00:10:48.560 ' 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:48.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.560 --rc genhtml_branch_coverage=1 00:10:48.560 --rc genhtml_function_coverage=1 00:10:48.560 --rc genhtml_legend=1 00:10:48.560 --rc geninfo_all_blocks=1 00:10:48.560 --rc geninfo_unexecuted_blocks=1 00:10:48.560 00:10:48.560 ' 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:48.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.560 --rc genhtml_branch_coverage=1 00:10:48.560 --rc genhtml_function_coverage=1 00:10:48.560 --rc genhtml_legend=1 00:10:48.560 --rc geninfo_all_blocks=1 00:10:48.560 --rc geninfo_unexecuted_blocks=1 00:10:48.560 00:10:48.560 ' 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:48.560 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.561 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.561 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.561 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:48.561 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.561 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:48.561 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:48.561 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:48.561 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:48.561 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:48.561 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:48.561 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:48.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:48.561 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:48.561 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:48.561 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:48.561 19:01:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:48.561 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:48.561 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:48.561 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:48.561 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:48.561 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:48.561 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:48.561 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:48.561 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.561 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:48.561 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.561 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:48.561 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:48.561 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:48.561 19:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:56.708 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:56.708 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:56.708 19:01:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:56.708 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:56.708 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:56.708 19:01:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:56.708 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:56.709 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:56.709 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:56.709 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:56.709 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:56.709 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:56.709 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:56.709 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:56.709 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:56.709 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:56.709 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:56.709 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:56.709 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:56.709 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:56.709 19:01:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:56.709 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:56.709 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:56.709 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:56.709 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:56.709 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:56.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.590 ms 00:10:56.709 00:10:56.709 --- 10.0.0.2 ping statistics --- 00:10:56.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.709 rtt min/avg/max/mdev = 0.590/0.590/0.590/0.000 ms 00:10:56.709 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:56.709 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:56.709 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:10:56.709 00:10:56.709 --- 10.0.0.1 ping statistics --- 00:10:56.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.709 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:10:56.709 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:56.709 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:56.709 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:56.709 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:56.709 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:56.709 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:56.709 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:56.709 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:56.709 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:56.709 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:56.709 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:56.709 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:56.709 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:56.709 ************************************ 00:10:56.709 START TEST nvmf_filesystem_no_in_capsule 00:10:56.709 ************************************ 00:10:56.709 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:56.709 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:56.709 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:56.709 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:56.709 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:56.709 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.709 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2819874 00:10:56.709 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2819874 00:10:56.709 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:56.709 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2819874 ']' 00:10:56.709 
19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.709 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:56.709 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.709 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:56.709 19:01:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.709 [2024-11-26 19:01:13.225789] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:10:56.709 [2024-11-26 19:01:13.225855] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.709 [2024-11-26 19:01:13.327454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:56.709 [2024-11-26 19:01:13.381626] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:56.709 [2024-11-26 19:01:13.381682] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:56.709 [2024-11-26 19:01:13.381690] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:56.709 [2024-11-26 19:01:13.381698] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:56.709 [2024-11-26 19:01:13.381705] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
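For readers reconstructing the setup from the trace: nvmftestinit above moved one of the two detected E810 ports into a private network namespace, so initiator and target traffic crosses the physical link on a single host. A condensed sketch of the steps the trace shows (interface names, addresses, and the namespace name are taken from this log; the nvmf_tgt path is shortened):

    # Target port lives in its own namespace; initiator port stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                 # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # namespace -> root ns
    # The target itself is then launched inside the namespace:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF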
00:10:56.709 [2024-11-26 19:01:13.383778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.709 [2024-11-26 19:01:13.383940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:56.709 [2024-11-26 19:01:13.384102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.709 [2024-11-26 19:01:13.384103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:56.971 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:56.971 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:56.971 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:56.971 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:56.971 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.971 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:56.971 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:56.971 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:56.971 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.971 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.971 [2024-11-26 19:01:14.105763] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:56.971 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.971 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:56.971 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.971 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.232 Malloc1 00:10:57.232 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.232 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:57.232 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.232 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.232 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.232 19:01:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:57.232 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.232 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.232 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.232 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:57.232 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.232 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.232 [2024-11-26 19:01:14.266441] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:57.232 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.232 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:57.232 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:57.232 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:57.232 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:57.232 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:57.232 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:57.232 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.232 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.232 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.232 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:57.232 { 00:10:57.232 "name": "Malloc1", 00:10:57.232 "aliases": [ 00:10:57.232 "a360328b-feb1-4692-b7f0-1104f4cb64e4" 00:10:57.232 ], 00:10:57.232 "product_name": "Malloc disk", 00:10:57.232 "block_size": 512, 00:10:57.232 "num_blocks": 1048576, 00:10:57.232 "uuid": "a360328b-feb1-4692-b7f0-1104f4cb64e4", 00:10:57.232 "assigned_rate_limits": { 00:10:57.232 "rw_ios_per_sec": 0, 00:10:57.232 "rw_mbytes_per_sec": 0, 00:10:57.232 "r_mbytes_per_sec": 0, 00:10:57.232 "w_mbytes_per_sec": 0 00:10:57.232 }, 00:10:57.232 "claimed": true, 00:10:57.232 "claim_type": "exclusive_write", 00:10:57.232 "zoned": false, 00:10:57.232 "supported_io_types": { 00:10:57.232 "read": 
true, 00:10:57.232 "write": true, 00:10:57.232 "unmap": true, 00:10:57.232 "flush": true, 00:10:57.232 "reset": true, 00:10:57.232 "nvme_admin": false, 00:10:57.232 "nvme_io": false, 00:10:57.232 "nvme_io_md": false, 00:10:57.232 "write_zeroes": true, 00:10:57.232 "zcopy": true, 00:10:57.232 "get_zone_info": false, 00:10:57.232 "zone_management": false, 00:10:57.232 "zone_append": false, 00:10:57.232 "compare": false, 00:10:57.232 "compare_and_write": false, 00:10:57.232 "abort": true, 00:10:57.232 "seek_hole": false, 00:10:57.232 "seek_data": false, 00:10:57.232 "copy": true, 00:10:57.232 "nvme_iov_md": false 00:10:57.232 }, 00:10:57.232 "memory_domains": [ 00:10:57.232 { 00:10:57.232 "dma_device_id": "system", 00:10:57.232 "dma_device_type": 1 00:10:57.232 }, 00:10:57.232 { 00:10:57.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.232 "dma_device_type": 2 00:10:57.232 } 00:10:57.232 ], 00:10:57.232 "driver_specific": {} 00:10:57.232 } 00:10:57.232 ]' 00:10:57.232 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:57.232 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:57.232 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:57.232 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:57.232 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:57.233 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:57.233 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:57.233 19:01:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:59.143 19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:59.143 19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:59.144 19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:59.144 19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:59.144 19:01:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:01.048 19:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:01.048 19:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:01.048 19:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:01.048 19:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:01.048 19:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:01.048 19:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:01.048 19:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:01.048 19:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:01.048 19:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:01.048 19:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:01.048 19:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:01.048 19:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:01.048 19:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:01.048 19:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:01.048 19:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:01.048 19:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:01.048 19:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:01.307 19:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:01.567 19:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:02.508 19:01:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:02.508 19:01:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:02.508 19:01:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:02.508 19:01:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.508 19:01:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.768 ************************************ 00:11:02.768 START TEST filesystem_ext4 00:11:02.768 ************************************ 00:11:02.768 19:01:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
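The block above is the complete provisioning path for the no_in_capsule run: create the TCP transport, back a namespace with a 512 MiB malloc bdev, expose it through subsystem nqn.2016-06.io.spdk:cnode1, attach from the host, and carve a single GPT partition for the filesystem subtests that follow. Stripped of the xtrace framing, it reduces to the sketch below (rpc_cmd is the suite's wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, and the host's --hostnqn/--hostid pair from the log is omitted for brevity):

    # Target side, issued over the RPC socket:
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0       # -c 0: no in-capsule data
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1              # 512 MiB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Host side: connect, locate the block device by serial, partition it.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep SPDKISFASTANDAWESOME               # resolves to nvme0n1 here
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe

Each filesystem_ext4/btrfs/xfs subtest below then runs mkfs on /dev/nvme0n1p1, mounts it at /mnt/device, writes and removes a file, unmounts, and verifies with kill -0 that the target process survived.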
00:11:02.768 19:01:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:02.768 19:01:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:02.768 19:01:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:02.768 19:01:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:02.768 19:01:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:02.768 19:01:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:02.768 19:01:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:02.768 19:01:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:02.768 19:01:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:02.768 19:01:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:02.768 mke2fs 1.47.0 (5-Feb-2023) 00:11:02.768 Discarding device blocks: 0/522240 done 00:11:02.768 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:02.768 Filesystem UUID: efc25ab6-0cc0-4e14-9db2-bf246e3bfeca 00:11:02.768 Superblock backups stored on blocks: 00:11:02.768 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:02.768 00:11:02.768 Allocating group tables: 0/64 done 00:11:02.768 Writing inode tables: 0/64 done 00:11:03.028 Creating journal (8192 blocks): done 00:11:03.028 Writing superblocks and filesystem accounting information: 0/64 done 00:11:03.028 00:11:03.028 19:01:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:03.028 19:01:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:09.613 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:09.613 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:09.613 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:09.613 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:09.613 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:09.613 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:09.613 
19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2819874 00:11:09.613 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:09.613 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:09.613 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:09.613 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:09.613 00:11:09.613 real 0m6.372s 00:11:09.613 user 0m0.029s 00:11:09.613 sys 0m0.077s 00:11:09.613 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:09.613 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:09.613 ************************************ 00:11:09.613 END TEST filesystem_ext4 00:11:09.613 ************************************ 00:11:09.613 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:09.613 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:09.613 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:09.613 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:09.613 ************************************ 00:11:09.613 START TEST filesystem_btrfs 00:11:09.613 ************************************ 00:11:09.613 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:09.613 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:09.613 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:09.613 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:09.613 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:09.613 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:09.613 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:09.613 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:09.613 19:01:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:09.613 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:09.613 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:09.613 btrfs-progs v6.8.1 00:11:09.613 See https://btrfs.readthedocs.io for more information. 00:11:09.613 00:11:09.613 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:09.613 NOTE: several default settings have changed in version 5.15, please make sure 00:11:09.613 this does not affect your deployments: 00:11:09.613 - DUP for metadata (-m dup) 00:11:09.613 - enabled no-holes (-O no-holes) 00:11:09.613 - enabled free-space-tree (-R free-space-tree) 00:11:09.613 00:11:09.613 Label: (null) 00:11:09.613 UUID: 3ebbbdf6-7eb5-4bac-8a7b-e5bfc380a3ff 00:11:09.613 Node size: 16384 00:11:09.613 Sector size: 4096 (CPU page size: 4096) 00:11:09.613 Filesystem size: 510.00MiB 00:11:09.613 Block group profiles: 00:11:09.613 Data: single 8.00MiB 00:11:09.613 Metadata: DUP 32.00MiB 00:11:09.614 System: DUP 8.00MiB 00:11:09.614 SSD detected: yes 00:11:09.614 Zoned device: no 00:11:09.614 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:09.614 Checksum: crc32c 00:11:09.614 Number of devices: 1 00:11:09.614 Devices: 00:11:09.614 ID SIZE PATH 00:11:09.614 1 510.00MiB /dev/nvme0n1p1 00:11:09.614 00:11:09.614 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:09.614 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:09.614 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:09.614 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:09.614 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:09.614 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:09.614 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:09.614 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:09.614 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2819874 00:11:09.614 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:09.614 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:09.614 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:09.614 
19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:09.614 00:11:09.614 real 0m0.573s 00:11:09.614 user 0m0.036s 00:11:09.614 sys 0m0.114s 00:11:09.614 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:09.614 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:09.614 ************************************ 00:11:09.614 END TEST filesystem_btrfs 00:11:09.614 ************************************ 00:11:09.614 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:09.614 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:09.614 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:09.614 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:09.874 ************************************ 00:11:09.874 START TEST filesystem_xfs 00:11:09.874 ************************************ 00:11:09.874 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:09.874 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:09.874 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:09.874 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:09.874 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:09.874 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:09.874 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:09.874 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:09.874 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:09.874 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:09.874 19:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:09.874 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:09.874 = sectsz=512 attr=2, projid32bit=1 00:11:09.874 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:09.874 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:09.874 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:09.874 = sunit=0 swidth=0 blks 00:11:09.874 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:09.874 log =internal log bsize=4096 blocks=16384, version=2 00:11:09.874 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:09.874 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:10.811 Discarding blocks...Done. 00:11:10.811 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:10.811 19:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:13.371 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:13.371 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:13.371 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:13.371 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:13.371 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:13.371 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:13.371 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2819874 00:11:13.371 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:13.371 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:13.371 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:13.371 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:13.371 00:11:13.371 real 0m3.376s 00:11:13.371 user 0m0.029s 00:11:13.371 sys 0m0.077s 00:11:13.371 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:13.371 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:13.371 ************************************ 00:11:13.371 END TEST filesystem_xfs 00:11:13.371 ************************************ 00:11:13.371 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:13.371 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:13.631 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:13.631 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.631 19:01:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:13.631 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:13.631 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:13.631 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:13.631 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:13.631 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:13.631 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:13.632 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:13.632 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.632 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.632 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.632 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:13.632 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2819874 00:11:13.632 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2819874 ']' 00:11:13.632 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2819874 00:11:13.893 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:13.893 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:13.893 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2819874 00:11:13.893 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:13.893 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:13.893 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2819874' 00:11:13.893 killing process with pid 2819874 00:11:13.893 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2819874 00:11:13.893 19:01:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 2819874 00:11:14.155 19:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:14.155 00:11:14.155 real 0m17.947s 00:11:14.155 user 1m10.847s 00:11:14.155 sys 0m1.457s 00:11:14.155 19:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.155 19:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.155 ************************************ 00:11:14.155 END TEST nvmf_filesystem_no_in_capsule 00:11:14.155 ************************************ 00:11:14.155 19:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:14.155 19:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:14.155 19:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.155 19:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:14.155 ************************************ 00:11:14.155 START TEST nvmf_filesystem_in_capsule 00:11:14.155 ************************************ 00:11:14.155 19:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:14.155 19:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:14.155 19:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:14.155 19:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:14.155 19:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:14.155 19:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.155 19:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2823745 00:11:14.155 19:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2823745 00:11:14.155 19:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:14.155 19:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2823745 ']' 00:11:14.155 19:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.155 19:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:14.155 19:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
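The in_capsule pass that starts here repeats the same ext4/btrfs/xfs flow against a fresh target (pid 2823745) with exactly one knob changed: filesystem.sh receives 4096 instead of 0 as the in-capsule size, so the transport is created with -c 4096 and host writes of up to 4 KiB travel inside the NVMe/TCP command capsule rather than being fetched in a separate data transfer. Side by side (same rpc.py form as the sketch above):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0      # no_in_capsule run (above)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # in_capsule run (below)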
00:11:14.155 19:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:14.155 19:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.155 [2024-11-26 19:01:31.261424] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:11:14.155 [2024-11-26 19:01:31.261482] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.155 [2024-11-26 19:01:31.352058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:14.415 [2024-11-26 19:01:31.383130] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:14.415 [2024-11-26 19:01:31.383163] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:14.415 [2024-11-26 19:01:31.383169] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:14.415 [2024-11-26 19:01:31.383174] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:14.415 [2024-11-26 19:01:31.383178] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:14.415 [2024-11-26 19:01:31.384418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:14.415 [2024-11-26 19:01:31.384626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:14.415 [2024-11-26 19:01:31.384778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.415 [2024-11-26 19:01:31.384779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:14.986 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:14.986 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:14.986 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:14.986 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:14.986 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.986 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:14.986 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:14.986 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:14.986 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.986 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.986 [2024-11-26 19:01:32.098918] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:14.986 19:01:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.986 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:14.986 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.986 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.246 Malloc1 00:11:15.246 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.246 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:15.246 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.246 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.246 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.246 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:15.246 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.246 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.246 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.246 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:15.246 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.246 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.246 [2024-11-26 19:01:32.228883] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:15.246 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.246 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:15.246 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:15.246 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:15.246 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:15.246 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:15.246 19:01:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:15.246 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.246 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.246 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.246 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:15.246 { 00:11:15.246 "name": "Malloc1", 00:11:15.246 "aliases": [ 00:11:15.246 "1fc12cfd-625c-4ef9-a00e-a2b98568e8a6" 00:11:15.246 ], 00:11:15.246 "product_name": "Malloc disk", 00:11:15.246 "block_size": 512, 00:11:15.246 "num_blocks": 1048576, 00:11:15.246 "uuid": "1fc12cfd-625c-4ef9-a00e-a2b98568e8a6", 00:11:15.246 "assigned_rate_limits": { 00:11:15.246 "rw_ios_per_sec": 0, 00:11:15.246 "rw_mbytes_per_sec": 0, 00:11:15.246 "r_mbytes_per_sec": 0, 00:11:15.246 "w_mbytes_per_sec": 0 00:11:15.246 }, 00:11:15.246 "claimed": true, 00:11:15.246 "claim_type": "exclusive_write", 00:11:15.246 "zoned": false, 00:11:15.246 "supported_io_types": { 00:11:15.246 "read": true, 00:11:15.246 "write": true, 00:11:15.246 "unmap": true, 00:11:15.246 "flush": true, 00:11:15.246 "reset": true, 00:11:15.246 "nvme_admin": false, 00:11:15.246 "nvme_io": false, 00:11:15.246 "nvme_io_md": false, 00:11:15.246 "write_zeroes": true, 00:11:15.246 "zcopy": true, 00:11:15.246 "get_zone_info": false, 00:11:15.246 "zone_management": false, 00:11:15.246 "zone_append": false, 00:11:15.246 "compare": false, 00:11:15.246 "compare_and_write": false, 00:11:15.246 "abort": true, 00:11:15.247 "seek_hole": false, 00:11:15.247 "seek_data": false, 00:11:15.247 "copy": true, 00:11:15.247 "nvme_iov_md": false 00:11:15.247 }, 00:11:15.247 "memory_domains": [ 00:11:15.247 { 00:11:15.247 "dma_device_id": "system", 00:11:15.247 "dma_device_type": 1 00:11:15.247 }, 00:11:15.247 { 00:11:15.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.247 "dma_device_type": 2 00:11:15.247 } 00:11:15.247 ], 00:11:15.247 "driver_specific": {} 00:11:15.247 } 00:11:15.247 ]' 00:11:15.247 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:15.247 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:15.247 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:15.247 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:15.247 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:15.247 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:15.247 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:15.247 19:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:17.157 19:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:17.157 19:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:17.157 19:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:17.157 19:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:17.157 19:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:19.067 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:19.067 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:19.067 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:19.067 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:19.067 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:19.067 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:19.067 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:19.067 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:19.067 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:19.067 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:19.067 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:19.067 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:19.067 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:19.067 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:19.067 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:19.068 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:19.068 19:01:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:19.068 19:01:36 
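The waitforserial / sec_size_to_bytes sequence above reduces to polling lsblk for the subsystem serial and then cross-checking the kernel block device size against the malloc bdev. A condensed sketch using the serial, bdev name, and regex from this run (jq assumed installed):

  # poll until the namespace surfaces, then resolve its kernel name
  while ! lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 2; done
  nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
  # both sides should agree on 512 * 1048576 = 536870912 bytes
  ./scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[0].block_size * .[0].num_blocks'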
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:20.010 19:01:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:20.953 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:20.953 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:20.953 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:20.953 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:20.953 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.953 ************************************ 00:11:20.953 START TEST filesystem_in_capsule_ext4 00:11:20.953 ************************************ 00:11:20.953 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:20.953 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:20.953 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:20.953 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:20.953 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:20.953 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:20.953 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:20.953 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:20.953 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:20.953 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:20.953 19:01:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:20.953 mke2fs 1.47.0 (5-Feb-2023) 00:11:20.953 Discarding device blocks: 0/522240 done 00:11:20.953 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:20.953 Filesystem UUID: dbead65d-3439-4fa6-9f22-3aaf50d97b5d 00:11:20.953 Superblock backups stored on blocks: 00:11:20.953 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:20.953 00:11:20.953 Allocating group tables: 0/64 done 00:11:20.953 Writing inode tables: 
0/64 done 00:11:23.497 Creating journal (8192 blocks): done 00:11:23.497 Writing superblocks and filesystem accounting information: 0/64 done 00:11:23.497 00:11:23.497 19:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:23.497 19:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:28.798 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:28.798 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:28.798 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:28.798 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:28.798 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:28.798 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:28.798 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2823745 00:11:28.798 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:28.798 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:28.798 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:28.798 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:28.798 00:11:28.798 real 0m7.881s 00:11:28.798 user 0m0.030s 00:11:28.798 sys 0m0.078s 00:11:28.798 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.798 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:28.798 ************************************ 00:11:28.798 END TEST filesystem_in_capsule_ext4 00:11:28.798 ************************************ 00:11:28.798 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:28.798 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:28.798 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:28.798 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.798 
************************************ 00:11:28.798 START TEST filesystem_in_capsule_btrfs 00:11:28.798 ************************************ 00:11:28.798 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:28.798 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:28.798 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:28.798 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:28.798 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:28.798 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:28.798 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:28.798 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:28.798 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:28.798 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:28.798 19:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:29.370 btrfs-progs v6.8.1 00:11:29.370 See https://btrfs.readthedocs.io for more information. 00:11:29.370 00:11:29.370 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:29.370 NOTE: several default settings have changed in version 5.15, please make sure 00:11:29.370 this does not affect your deployments: 00:11:29.370 - DUP for metadata (-m dup) 00:11:29.370 - enabled no-holes (-O no-holes) 00:11:29.370 - enabled free-space-tree (-R free-space-tree) 00:11:29.370 00:11:29.370 Label: (null) 00:11:29.370 UUID: fdff6d11-02f3-43ad-ab69-1664f735c541 00:11:29.370 Node size: 16384 00:11:29.370 Sector size: 4096 (CPU page size: 4096) 00:11:29.370 Filesystem size: 510.00MiB 00:11:29.370 Block group profiles: 00:11:29.370 Data: single 8.00MiB 00:11:29.370 Metadata: DUP 32.00MiB 00:11:29.370 System: DUP 8.00MiB 00:11:29.370 SSD detected: yes 00:11:29.370 Zoned device: no 00:11:29.370 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:29.370 Checksum: crc32c 00:11:29.370 Number of devices: 1 00:11:29.370 Devices: 00:11:29.370 ID SIZE PATH 00:11:29.370 1 510.00MiB /dev/nvme0n1p1 00:11:29.370 00:11:29.370 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:29.370 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:29.942 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:29.942 19:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:29.942 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:29.942 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:29.942 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:29.942 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:29.942 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2823745 00:11:29.942 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:29.942 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:29.942 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:29.942 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:29.942 00:11:29.942 real 0m1.145s 00:11:29.942 user 0m0.021s 00:11:29.942 sys 0m0.130s 00:11:29.942 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.942 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:29.942 ************************************ 00:11:29.942 END TEST filesystem_in_capsule_btrfs 00:11:29.942 ************************************ 00:11:29.942 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:29.942 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:29.942 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.942 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.942 ************************************ 00:11:29.942 START TEST filesystem_in_capsule_xfs 00:11:29.942 ************************************ 00:11:29.942 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:29.942 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:29.942 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:29.942 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:29.942 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:29.942 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:29.942 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:29.942 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:29.942 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:29.942 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:29.942 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:30.204 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:30.204 = sectsz=512 attr=2, projid32bit=1 00:11:30.204 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:30.204 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:30.204 data = bsize=4096 blocks=130560, imaxpct=25 00:11:30.204 = sunit=0 swidth=0 blks 00:11:30.204 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:30.204 log =internal log bsize=4096 blocks=16384, version=2 00:11:30.204 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:30.204 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:30.775 Discarding blocks...Done. 
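The make_filesystem helper, exercised here for ext4, btrfs, and xfs in turn, differs per fstype only in its force flag (ext4 wants -F, the others -f), as the xtrace at common/autotest_common.sh@935-941 shows. A reduced sketch of that selection logic (the real helper's retry counter is omitted):

  make_fs() {
      local fstype=$1 dev=$2 force=-f
      [[ $fstype == ext4 ]] && force=-F
      mkfs."$fstype" "$force" "$dev"
  }
  make_fs xfs /dev/nvme0n1p1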
00:11:31.038 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:31.038 19:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:32.953 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:32.953 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:32.953 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:32.953 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:32.953 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:32.953 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:32.953 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2823745 00:11:32.953 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:32.953 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:32.953 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:32.953 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:32.953 00:11:32.953 real 0m2.720s 00:11:32.953 user 0m0.023s 00:11:32.953 sys 0m0.082s 00:11:32.953 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.953 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:32.953 ************************************ 00:11:32.953 END TEST filesystem_in_capsule_xfs 00:11:32.953 ************************************ 00:11:32.953 19:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:33.214 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:33.214 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:33.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.214 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:33.214 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:11:33.214 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:33.214 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:33.214 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:33.214 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:33.214 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:33.214 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:33.214 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.214 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.214 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.214 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:33.214 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2823745 00:11:33.214 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2823745 ']' 00:11:33.214 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2823745 00:11:33.214 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:33.214 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:33.214 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2823745 00:11:33.214 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:33.214 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:33.214 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2823745' 00:11:33.214 killing process with pid 2823745 00:11:33.214 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2823745 00:11:33.214 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2823745 00:11:33.475 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:33.475 00:11:33.475 real 0m19.400s 00:11:33.475 user 1m16.699s 00:11:33.475 sys 0m1.461s 00:11:33.475 19:01:50 
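Teardown below is symmetric with setup: disconnect the host, delete the subsystem, then stop the target by PID (2823745 in this run; $nvmfpid stands in for it here). As standalone commands:

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # drop the host-side connection first
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid" && wait "$nvmfpid"               # then stop the nvmf_tgt process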
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.475 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.475 ************************************ 00:11:33.475 END TEST nvmf_filesystem_in_capsule 00:11:33.475 ************************************ 00:11:33.475 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:33.475 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:33.475 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:33.475 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:33.475 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:33.475 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:33.475 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:33.475 rmmod nvme_tcp 00:11:33.475 rmmod nvme_fabrics 00:11:33.475 rmmod nvme_keyring 00:11:33.735 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:33.735 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:33.735 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:33.735 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:33.736 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:33.736 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:33.736 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:33.736 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:33.736 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:33.736 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:33.736 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:33.736 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:33.736 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:33.736 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.736 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.736 19:01:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.649 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:35.649 00:11:35.649 real 0m47.690s 00:11:35.649 user 2m29.876s 00:11:35.649 sys 0m8.888s 00:11:35.649 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.649 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:35.649 
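nvmftestfini's cleanup, visible above, unloads the kernel NVMe fabrics stack and strips only the firewall rules the harness tagged. The two operative commands, extracted from the trace:

  modprobe -v -r nvme-tcp    # the cascade also drops nvme_fabrics and nvme_keyring, per the rmmod lines
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # remove only SPDK-tagged rules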
************************************ 00:11:35.649 END TEST nvmf_filesystem 00:11:35.649 ************************************ 00:11:35.649 19:01:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:35.649 19:01:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:35.649 19:01:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.649 19:01:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:35.911 ************************************ 00:11:35.911 START TEST nvmf_target_discovery 00:11:35.911 ************************************ 00:11:35.911 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:35.911 * Looking for test storage... 00:11:35.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:35.911 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:35.911 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:11:35.911 19:01:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:35.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.911 --rc genhtml_branch_coverage=1 00:11:35.911 --rc genhtml_function_coverage=1 00:11:35.911 --rc genhtml_legend=1 00:11:35.911 --rc geninfo_all_blocks=1 00:11:35.911 --rc geninfo_unexecuted_blocks=1 00:11:35.911 00:11:35.911 ' 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:35.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.911 --rc genhtml_branch_coverage=1 00:11:35.911 --rc genhtml_function_coverage=1 00:11:35.911 --rc genhtml_legend=1 00:11:35.911 --rc geninfo_all_blocks=1 00:11:35.911 --rc geninfo_unexecuted_blocks=1 00:11:35.911 00:11:35.911 ' 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:35.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.911 --rc genhtml_branch_coverage=1 00:11:35.911 --rc genhtml_function_coverage=1 00:11:35.911 --rc genhtml_legend=1 00:11:35.911 --rc geninfo_all_blocks=1 00:11:35.911 --rc geninfo_unexecuted_blocks=1 00:11:35.911 00:11:35.911 ' 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:35.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.911 --rc genhtml_branch_coverage=1 00:11:35.911 --rc genhtml_function_coverage=1 00:11:35.911 --rc genhtml_legend=1 00:11:35.911 --rc geninfo_all_blocks=1 00:11:35.911 --rc geninfo_unexecuted_blocks=1 00:11:35.911 00:11:35.911 ' 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:35.911 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:35.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:35.912 19:01:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.055 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:44.055 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:44.055 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:44.055 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:44.055 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:44.055 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:44.055 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:44.055 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:44.055 19:02:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:44.055 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:44.055 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:44.055 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:44.055 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:44.055 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:44.055 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:44.055 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:44.055 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:44.055 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:44.055 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:44.055 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:44.055 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:44.055 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:44.055 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:44.055 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:44.055 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:44.055 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:44.055 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:44.055 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:44.055 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:44.055 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:44.055 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:44.055 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:44.056 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:44.056 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:44.056 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:44.056 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:44.056 19:02:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:44.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:44.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:11:44.056 00:11:44.056 --- 10.0.0.2 ping statistics --- 00:11:44.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.056 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:44.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:44.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:11:44.056 00:11:44.056 --- 10.0.0.1 ping statistics --- 00:11:44.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.056 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2831798 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2831798 00:11:44.056 19:02:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2831798 ']' 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:44.056 19:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.056 [2024-11-26 19:02:00.690616] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:11:44.056 [2024-11-26 19:02:00.690684] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.056 [2024-11-26 19:02:00.790905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:44.056 [2024-11-26 19:02:00.844146] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:44.056 [2024-11-26 19:02:00.844213] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:44.057 [2024-11-26 19:02:00.844222] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:44.057 [2024-11-26 19:02:00.844229] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:44.057 [2024-11-26 19:02:00.844236] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
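The trace above covers the harness's nvmf_tcp_init and target launch: one E810 port (cvl_0_0) is moved into a private network namespace, the peer port (cvl_0_1) stays in the root namespace as the initiator side, and nvmf_tgt is started inside the namespace so NVMe/TCP traffic crosses the physical link instead of loopback. A minimal sketch of that setup, assembled only from commands visible in this trace (interface names, addresses, and the nvmf_tgt arguments are specific to this CI host, not generic defaults):

  # Sketch of nvmf_tcp_init as traced above; names and paths are this host's.
  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the first E810 port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP in
  ping -c 1 10.0.0.2                                  # cross-namespace reachability check
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF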
00:11:44.057 [2024-11-26 19:02:00.846393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.057 [2024-11-26 19:02:00.846554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:44.057 [2024-11-26 19:02:00.846713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:44.057 [2024-11-26 19:02:00.846713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.318 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:44.318 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:44.318 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:44.318 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:44.318 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.581 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.581 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:44.581 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.581 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.581 [2024-11-26 19:02:01.572631] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:44.581 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.581 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:44.581 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:44.581 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:44.581 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.581 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.581 Null1 00:11:44.581 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.581 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:44.581 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.581 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.581 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.581 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:44.581 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.581 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.582 19:02:01 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.582 [2024-11-26 19:02:01.646462] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.582 Null2 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:44.582 Null3 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.582 Null4 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.582 19:02:01 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.582 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.844 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.844 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:44.844 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.844 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.844 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.844 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:44.844 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.844 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.844 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.844 19:02:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:11:44.844 00:11:44.844 Discovery Log Number of Records 6, Generation counter 6 00:11:44.844 =====Discovery Log Entry 0====== 00:11:44.844 trtype: tcp 00:11:44.844 adrfam: ipv4 00:11:44.844 subtype: current discovery subsystem 00:11:44.844 treq: not required 00:11:44.844 portid: 0 00:11:44.844 trsvcid: 4420 00:11:44.844 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:44.844 traddr: 10.0.0.2 00:11:44.844 eflags: explicit discovery connections, duplicate discovery information 00:11:44.844 sectype: none 00:11:44.844 =====Discovery Log Entry 1====== 00:11:44.844 trtype: tcp 00:11:44.844 adrfam: ipv4 00:11:44.844 subtype: nvme subsystem 00:11:44.844 treq: not required 00:11:44.844 portid: 0 00:11:44.844 trsvcid: 4420 00:11:44.844 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:44.844 traddr: 10.0.0.2 00:11:44.844 eflags: none 00:11:44.844 sectype: none 00:11:44.844 =====Discovery Log Entry 2====== 00:11:44.844 trtype: tcp 00:11:44.844 adrfam: ipv4 00:11:44.844 subtype: nvme subsystem 00:11:44.844 treq: not required 00:11:44.844 portid: 0 00:11:44.844 trsvcid: 4420 00:11:44.844 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:44.844 traddr: 10.0.0.2 00:11:44.844 eflags: none 00:11:44.844 sectype: none 00:11:44.844 =====Discovery Log Entry 3====== 00:11:44.844 trtype: tcp 00:11:44.844 adrfam: ipv4 00:11:44.844 subtype: nvme subsystem 00:11:44.844 treq: not required 00:11:44.844 portid: 0 00:11:44.844 trsvcid: 4420 00:11:44.844 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:44.844 traddr: 10.0.0.2 00:11:44.844 eflags: none 00:11:44.844 sectype: none 00:11:44.844 =====Discovery Log Entry 4====== 00:11:44.844 trtype: tcp 00:11:44.844 adrfam: ipv4 00:11:44.844 subtype: nvme subsystem 
00:11:44.844 treq: not required 00:11:44.844 portid: 0 00:11:44.844 trsvcid: 4420 00:11:44.844 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:44.844 traddr: 10.0.0.2 00:11:44.844 eflags: none 00:11:44.844 sectype: none 00:11:44.844 =====Discovery Log Entry 5====== 00:11:44.844 trtype: tcp 00:11:44.844 adrfam: ipv4 00:11:44.844 subtype: discovery subsystem referral 00:11:44.844 treq: not required 00:11:44.844 portid: 0 00:11:44.844 trsvcid: 4430 00:11:44.844 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:44.844 traddr: 10.0.0.2 00:11:44.844 eflags: none 00:11:44.844 sectype: none 00:11:44.844 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:44.844 Perform nvmf subsystem discovery via RPC 00:11:44.844 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:44.844 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.844 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.844 [ 00:11:45.106 { 00:11:45.106 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:45.106 "subtype": "Discovery", 00:11:45.106 "listen_addresses": [ 00:11:45.106 { 00:11:45.106 "trtype": "TCP", 00:11:45.106 "adrfam": "IPv4", 00:11:45.106 "traddr": "10.0.0.2", 00:11:45.106 "trsvcid": "4420" 00:11:45.106 } 00:11:45.106 ], 00:11:45.106 "allow_any_host": true, 00:11:45.106 "hosts": [] 00:11:45.106 }, 00:11:45.106 { 00:11:45.106 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:45.106 "subtype": "NVMe", 00:11:45.106 "listen_addresses": [ 00:11:45.106 { 00:11:45.106 "trtype": "TCP", 00:11:45.106 "adrfam": "IPv4", 00:11:45.106 "traddr": "10.0.0.2", 00:11:45.106 "trsvcid": "4420" 00:11:45.106 } 00:11:45.106 ], 00:11:45.106 "allow_any_host": true, 00:11:45.106 "hosts": [], 00:11:45.106 "serial_number": "SPDK00000000000001", 00:11:45.106 "model_number": "SPDK bdev Controller", 00:11:45.106 "max_namespaces": 32, 00:11:45.106 "min_cntlid": 1, 00:11:45.106 "max_cntlid": 65519, 00:11:45.106 "namespaces": [ 00:11:45.106 { 00:11:45.106 "nsid": 1, 00:11:45.106 "bdev_name": "Null1", 00:11:45.106 "name": "Null1", 00:11:45.106 "nguid": "D445712049E44D008BE148AEF24010D3", 00:11:45.106 "uuid": "d4457120-49e4-4d00-8be1-48aef24010d3" 00:11:45.106 } 00:11:45.106 ] 00:11:45.106 }, 00:11:45.106 { 00:11:45.106 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:45.106 "subtype": "NVMe", 00:11:45.106 "listen_addresses": [ 00:11:45.106 { 00:11:45.106 "trtype": "TCP", 00:11:45.106 "adrfam": "IPv4", 00:11:45.106 "traddr": "10.0.0.2", 00:11:45.106 "trsvcid": "4420" 00:11:45.106 } 00:11:45.106 ], 00:11:45.106 "allow_any_host": true, 00:11:45.106 "hosts": [], 00:11:45.106 "serial_number": "SPDK00000000000002", 00:11:45.106 "model_number": "SPDK bdev Controller", 00:11:45.106 "max_namespaces": 32, 00:11:45.106 "min_cntlid": 1, 00:11:45.106 "max_cntlid": 65519, 00:11:45.106 "namespaces": [ 00:11:45.106 { 00:11:45.106 "nsid": 1, 00:11:45.106 "bdev_name": "Null2", 00:11:45.106 "name": "Null2", 00:11:45.106 "nguid": "5B7C97A410904559BD0149812E998FD4", 00:11:45.106 "uuid": "5b7c97a4-1090-4559-bd01-49812e998fd4" 00:11:45.106 } 00:11:45.106 ] 00:11:45.106 }, 00:11:45.106 { 00:11:45.106 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:45.106 "subtype": "NVMe", 00:11:45.106 "listen_addresses": [ 00:11:45.106 { 00:11:45.106 "trtype": "TCP", 00:11:45.106 "adrfam": "IPv4", 00:11:45.106 "traddr": "10.0.0.2", 
00:11:45.106 "trsvcid": "4420" 00:11:45.106 } 00:11:45.106 ], 00:11:45.106 "allow_any_host": true, 00:11:45.106 "hosts": [], 00:11:45.106 "serial_number": "SPDK00000000000003", 00:11:45.106 "model_number": "SPDK bdev Controller", 00:11:45.106 "max_namespaces": 32, 00:11:45.106 "min_cntlid": 1, 00:11:45.106 "max_cntlid": 65519, 00:11:45.106 "namespaces": [ 00:11:45.106 { 00:11:45.106 "nsid": 1, 00:11:45.106 "bdev_name": "Null3", 00:11:45.106 "name": "Null3", 00:11:45.106 "nguid": "F863022409CB480796EB187E56FF307A", 00:11:45.106 "uuid": "f8630224-09cb-4807-96eb-187e56ff307a" 00:11:45.106 } 00:11:45.106 ] 00:11:45.106 }, 00:11:45.106 { 00:11:45.106 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:45.106 "subtype": "NVMe", 00:11:45.106 "listen_addresses": [ 00:11:45.106 { 00:11:45.106 "trtype": "TCP", 00:11:45.106 "adrfam": "IPv4", 00:11:45.106 "traddr": "10.0.0.2", 00:11:45.106 "trsvcid": "4420" 00:11:45.106 } 00:11:45.106 ], 00:11:45.106 "allow_any_host": true, 00:11:45.106 "hosts": [], 00:11:45.106 "serial_number": "SPDK00000000000004", 00:11:45.106 "model_number": "SPDK bdev Controller", 00:11:45.106 "max_namespaces": 32, 00:11:45.106 "min_cntlid": 1, 00:11:45.106 "max_cntlid": 65519, 00:11:45.106 "namespaces": [ 00:11:45.106 { 00:11:45.106 "nsid": 1, 00:11:45.106 "bdev_name": "Null4", 00:11:45.106 "name": "Null4", 00:11:45.106 "nguid": "EA1CFC32A2CC4E048E100B75518688D6", 00:11:45.106 "uuid": "ea1cfc32-a2cc-4e04-8e10-0b75518688d6" 00:11:45.106 } 00:11:45.106 ] 00:11:45.106 } 00:11:45.106 ] 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.107 19:02:02 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:45.107 19:02:02 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:45.107 rmmod nvme_tcp 00:11:45.107 rmmod nvme_fabrics 00:11:45.107 rmmod nvme_keyring 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2831798 ']' 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2831798 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2831798 ']' 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2831798 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:45.107 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2831798 00:11:45.377 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:45.377 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:45.377 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2831798' 00:11:45.377 killing process with pid 2831798 00:11:45.377 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2831798 00:11:45.377 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2831798 00:11:45.377 19:02:02 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:45.377 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:45.377 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:45.377 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:45.377 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:45.377 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:45.377 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:45.377 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:45.377 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:45.377 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.377 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:45.377 19:02:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.431 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:47.431 00:11:47.431 real 0m11.756s 00:11:47.431 user 0m9.092s 00:11:47.431 sys 0m6.165s 00:11:47.431 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.431 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.431 ************************************ 00:11:47.431 END TEST nvmf_target_discovery 00:11:47.431 ************************************ 00:11:47.692 19:02:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:47.692 19:02:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:47.692 19:02:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.692 19:02:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:47.692 ************************************ 00:11:47.692 START TEST nvmf_referrals 00:11:47.692 ************************************ 00:11:47.692 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:47.692 * Looking for test storage... 
00:11:47.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:47.692 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:47.692 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:11:47.692 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:47.693 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:47.693 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:47.693 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:47.693 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:47.693 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:47.693 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:47.693 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:47.693 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:47.693 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:47.693 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:47.693 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:47.693 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:47.693 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:47.693 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:47.693 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:47.693 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:47.693 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:47.693 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:47.693 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:47.693 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:47.693 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:47.954 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:47.954 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:47.954 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:47.954 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:47.954 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:47.954 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:47.954 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:47.954 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:47.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.955 --rc genhtml_branch_coverage=1 00:11:47.955 --rc genhtml_function_coverage=1 00:11:47.955 --rc genhtml_legend=1 00:11:47.955 --rc geninfo_all_blocks=1 00:11:47.955 --rc geninfo_unexecuted_blocks=1 00:11:47.955 00:11:47.955 ' 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:47.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.955 --rc genhtml_branch_coverage=1 00:11:47.955 --rc genhtml_function_coverage=1 00:11:47.955 --rc genhtml_legend=1 00:11:47.955 --rc geninfo_all_blocks=1 00:11:47.955 --rc geninfo_unexecuted_blocks=1 00:11:47.955 00:11:47.955 ' 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:47.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.955 --rc genhtml_branch_coverage=1 00:11:47.955 --rc genhtml_function_coverage=1 00:11:47.955 --rc genhtml_legend=1 00:11:47.955 --rc geninfo_all_blocks=1 00:11:47.955 --rc geninfo_unexecuted_blocks=1 00:11:47.955 00:11:47.955 ' 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:47.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.955 --rc genhtml_branch_coverage=1 00:11:47.955 --rc genhtml_function_coverage=1 00:11:47.955 --rc genhtml_legend=1 00:11:47.955 --rc geninfo_all_blocks=1 00:11:47.955 --rc geninfo_unexecuted_blocks=1 00:11:47.955 00:11:47.955 ' 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:47.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:47.955 19:02:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.090 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:56.090 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:56.090 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:56.090 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:56.090 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:56.090 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:56.090 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:56.090 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:56.090 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:56.090 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:56.091 19:02:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:56.091 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:56.091 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:56.091 
19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:56.091 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:56.091 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:56.091 19:02:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:56.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:56.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:11:56.091 00:11:56.091 --- 10.0.0.2 ping statistics --- 00:11:56.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.091 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:56.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:56.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:11:56.091 00:11:56.091 --- 10.0.0.1 ping statistics --- 00:11:56.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.091 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:56.091 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:56.092 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:56.092 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:56.092 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:56.092 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:56.092 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.092 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2836411 00:11:56.092 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2836411 00:11:56.092 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:56.092 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2836411 ']' 00:11:56.092 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.092 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:56.092 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
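[editor: both pings succeeding closes out the topology setup: the two E810 ports are looped back to each other, with the target port hidden in a private network namespace so initiator and target can share one host. Condensed from the commands traced above; cvl_0_0/cvl_0_1 are this rig's renamed ice ports, and the iptables comment tag is what lets cleanup strip the rule later:]

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                  # root ns -> namespaced target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespaced target -> root ns
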
00:11:56.092 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:56.092 19:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.092 [2024-11-26 19:02:12.598795] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:11:56.092 [2024-11-26 19:02:12.598860] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:56.092 [2024-11-26 19:02:12.701915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:56.092 [2024-11-26 19:02:12.754366] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:56.092 [2024-11-26 19:02:12.754418] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:56.092 [2024-11-26 19:02:12.754427] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:56.092 [2024-11-26 19:02:12.754435] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:56.092 [2024-11-26 19:02:12.754442] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:56.092 [2024-11-26 19:02:12.756864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:56.092 [2024-11-26 19:02:12.757026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:56.092 [2024-11-26 19:02:12.757218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:56.092 [2024-11-26 19:02:12.757268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.354 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:56.354 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:56.354 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:56.354 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:56.354 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.354 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:56.354 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:56.354 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.354 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.354 [2024-11-26 19:02:13.478862] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:56.354 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.354 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:56.354 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.354 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
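[editor: from here referrals.sh drives the target over its JSON-RPC socket; rpc_cmd in the trace is the harness's wrapper around SPDK's stock rpc.py client. The setup sequence traced above and just below, written out as plain rpc.py calls:]

  # nvmf_tgt itself was launched inside the namespace:
  #   ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF
  rpc.py nvmf_create_transport -t tcp -o -u 8192     # opts come from NVMF_TRANSPORT_OPTS='-t tcp -o'
  rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
  rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
  rpc.py nvmf_discovery_get_referrals | jq length    # referrals.sh expects 3 at this point
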
00:11:56.354 [2024-11-26 19:02:13.511454] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:56.354 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.354 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:56.354 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.354 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.354 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.354 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:56.354 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.354 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.354 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.354 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:56.354 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.354 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.354 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.354 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:56.354 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:56.354 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.354 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.616 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.616 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:56.616 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:56.616 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:56.616 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:56.616 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:56.616 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.616 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:56.616 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.616 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.616 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:56.616 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:56.616 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:56.616 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:56.616 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:56.616 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:56.616 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:56.616 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:56.877 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:56.877 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:56.877 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:56.877 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.877 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.877 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.877 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:56.877 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.877 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.877 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.877 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:56.877 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.877 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.877 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.877 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:56.877 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:56.877 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.877 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.878 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.878 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:56.878 19:02:13 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:56.878 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:56.878 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:56.878 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:56.878 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:56.878 19:02:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:57.138 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:57.138 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:57.138 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:57.138 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.138 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.138 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.138 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:57.138 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.138 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.138 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.138 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:57.138 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:57.138 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:57.138 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:57.138 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.138 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.138 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:57.138 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.138 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:57.138 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:57.138 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:57.138 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:11:57.138 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:57.138 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:57.138 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:57.138 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:57.398 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:57.398 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:57.398 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:57.398 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:57.398 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:57.398 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:57.398 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:57.658 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:57.658 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:57.658 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:57.658 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:57.658 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:57.658 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:57.658 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:57.658 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:57.658 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.658 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.658 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.658 19:02:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:57.658 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:57.658 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:57.658 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:57.658 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.658 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.658 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:57.658 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.919 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:57.919 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:57.919 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:57.919 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:57.919 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:57.919 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:57.919 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:57.919 19:02:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:57.919 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:57.919 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:57.919 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:57.919 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:57.919 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:57.919 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:57.919 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:58.179 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:58.179 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:58.179 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:58.179 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:11:58.179 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:58.179 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:58.179 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:58.440 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:58.440 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.440 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.440 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.440 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:58.440 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.440 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:58.440 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.440 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.440 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:58.440 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:58.440 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:58.440 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:58.440 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:58.440 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:58.440 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:58.701 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:58.701 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:58.701 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:58.701 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:58.701 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:58.701 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:58.701 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
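[editor: every referral add/remove above is verified from two sides: the target's RPC bookkeeping and the discovery log page actually served on the wire, so a stale in-memory list and a bad wire encoding fail different assertions. The two probes, condensed from the get_referral_ips helper traced above; $NVME_HOSTNQN and $NVME_HOSTID are the generated values from nvmf/common.sh:]

  # control-plane view of the referral list
  rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
  # on-the-wire view: query the discovery service itself and drop its own entry
  nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -a 10.0.0.2 -s 8009 -o json |
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
  # both must print the same address set, e.g. 127.0.0.2 127.0.0.3 127.0.0.4
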
00:11:58.701 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:58.701 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:58.701 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:58.701 rmmod nvme_tcp 00:11:58.701 rmmod nvme_fabrics 00:11:58.701 rmmod nvme_keyring 00:11:58.701 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:58.701 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:58.701 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:58.701 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2836411 ']' 00:11:58.701 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2836411 00:11:58.701 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2836411 ']' 00:11:58.701 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2836411 00:11:58.701 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:58.701 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:58.701 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2836411 00:11:58.701 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:58.701 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:58.701 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2836411' 00:11:58.701 killing process with pid 2836411 00:11:58.701 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 2836411 00:11:58.701 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2836411 00:11:58.962 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:58.962 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:58.962 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:58.962 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:58.962 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:58.962 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:58.962 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:58.962 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:58.962 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:58.962 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.962 19:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:58.962 19:02:15 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.872 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:00.872 00:12:00.872 real 0m13.322s 00:12:00.872 user 0m15.847s 00:12:00.872 sys 0m6.646s 00:12:00.872 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.872 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.872 ************************************ 00:12:00.872 END TEST nvmf_referrals 00:12:00.872 ************************************ 00:12:00.872 19:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:00.872 19:02:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:00.872 19:02:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.872 19:02:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:01.133 ************************************ 00:12:01.133 START TEST nvmf_connect_disconnect 00:12:01.133 ************************************ 00:12:01.133 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:01.133 * Looking for test storage... 00:12:01.133 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:01.133 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:01.133 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:12:01.133 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:01.133 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:01.133 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:01.133 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:01.133 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:01.133 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:01.133 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:01.133 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:01.133 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:01.133 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:01.133 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:01.133 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:01.133 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:01.133 19:02:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:01.133 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:01.133 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:01.133 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:01.133 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:01.133 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:01.133 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:01.133 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:01.133 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:01.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.134 --rc genhtml_branch_coverage=1 00:12:01.134 --rc genhtml_function_coverage=1 00:12:01.134 --rc genhtml_legend=1 00:12:01.134 --rc geninfo_all_blocks=1 00:12:01.134 --rc geninfo_unexecuted_blocks=1 00:12:01.134 00:12:01.134 ' 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:01.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.134 --rc genhtml_branch_coverage=1 00:12:01.134 --rc genhtml_function_coverage=1 00:12:01.134 --rc genhtml_legend=1 00:12:01.134 --rc geninfo_all_blocks=1 00:12:01.134 --rc geninfo_unexecuted_blocks=1 00:12:01.134 00:12:01.134 ' 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:01.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.134 --rc genhtml_branch_coverage=1 00:12:01.134 --rc genhtml_function_coverage=1 00:12:01.134 --rc genhtml_legend=1 00:12:01.134 --rc geninfo_all_blocks=1 00:12:01.134 --rc geninfo_unexecuted_blocks=1 00:12:01.134 00:12:01.134 ' 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:01.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.134 --rc genhtml_branch_coverage=1 00:12:01.134 --rc genhtml_function_coverage=1 00:12:01.134 --rc genhtml_legend=1 00:12:01.134 --rc geninfo_all_blocks=1 00:12:01.134 --rc geninfo_unexecuted_blocks=1 00:12:01.134 00:12:01.134 ' 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:01.134 19:02:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:01.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:01.134 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:01.395 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:01.395 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:01.395 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:01.395 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:01.395 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:01.395 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:01.395 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:01.395 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:01.395 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.395 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:01.395 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.395 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:01.395 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:01.395 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:01.395 19:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:09.535 
19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:09.535 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:09.535 
19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:09.535 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:09.535 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:09.535 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:09.536 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:09.536 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:09.536 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.595 ms 00:12:09.536 00:12:09.536 --- 10.0.0.2 ping statistics --- 00:12:09.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.536 rtt min/avg/max/mdev = 0.595/0.595/0.595/0.000 ms 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:09.536 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:09.536 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:12:09.536 00:12:09.536 --- 10.0.0.1 ping statistics --- 00:12:09.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.536 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=2841340 00:12:09.536 19:02:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2841340 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2841340 ']' 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:09.536 19:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:09.536 [2024-11-26 19:02:25.904218] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:12:09.536 [2024-11-26 19:02:25.904287] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.536 [2024-11-26 19:02:26.003484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:09.536 [2024-11-26 19:02:26.056967] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:09.536 [2024-11-26 19:02:26.057020] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:09.536 [2024-11-26 19:02:26.057028] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:09.536 [2024-11-26 19:02:26.057035] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:09.536 [2024-11-26 19:02:26.057042] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:09.536 [2024-11-26 19:02:26.059469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.536 [2024-11-26 19:02:26.059629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:09.536 [2024-11-26 19:02:26.059791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.536 [2024-11-26 19:02:26.059791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:09.536 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:09.536 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:09.536 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:09.536 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:09.536 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:09.797 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:09.797 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:09.797 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.797 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:09.797 [2024-11-26 19:02:26.785698] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:09.797 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.797 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:09.797 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.797 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:09.797 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.797 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:09.797 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:09.797 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.797 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:09.797 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.797 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:09.797 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.797 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:09.797 19:02:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.797 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:09.797 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.797 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:09.797 [2024-11-26 19:02:26.857651] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:09.797 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.797 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:09.797 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:09.797 19:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:14.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.513 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.813 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.116 19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:28.116 19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:28.116 19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:28.116 19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:28.116 19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:28.116 19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:28.116 19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:28.116 19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:28.116 rmmod nvme_tcp 00:12:28.116 rmmod nvme_fabrics 00:12:28.116 rmmod nvme_keyring 00:12:28.116 19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:28.116 19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:28.116 19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:28.116 19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2841340 ']' 00:12:28.116 19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2841340 00:12:28.116 19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2841340 ']' 00:12:28.116 19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2841340 00:12:28.116 19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
00:12:28.116 19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:28.116 19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2841340 00:12:28.116 19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:28.116 19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:28.116 19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2841340' 00:12:28.116 killing process with pid 2841340 00:12:28.116 19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2841340 00:12:28.116 19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2841340 00:12:28.377 19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:28.377 19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:28.377 19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:28.377 19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:28.377 19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:28.377 19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:28.377 19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:28.377 19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:28.377 19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:28.377 19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.377 19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:28.377 19:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.292 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:30.292 00:12:30.292 real 0m29.320s 00:12:30.292 user 1m18.992s 00:12:30.292 sys 0m7.142s 00:12:30.292 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:30.292 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:30.292 ************************************ 00:12:30.292 END TEST nvmf_connect_disconnect 00:12:30.292 ************************************ 00:12:30.292 19:02:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:30.292 19:02:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:30.292 19:02:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:30.292 19:02:47 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:30.554 ************************************ 00:12:30.554 START TEST nvmf_multitarget 00:12:30.554 ************************************ 00:12:30.554 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:30.554 * Looking for test storage... 00:12:30.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:30.554 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:30.554 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:12:30.554 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:30.554 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:30.554 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:30.554 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:30.554 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:30.554 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:30.554 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:30.554 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:30.554 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:30.554 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:30.554 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:30.554 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:30.554 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:30.554 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:30.554 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:30.554 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:30.554 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:30.554 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:30.554 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:30.554 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:30.554 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:30.554 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:30.554 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:30.554 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:30.554 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:30.554 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:30.554 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:30.554 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:30.554 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:30.554 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:30.554 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:30.554 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:30.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.554 --rc genhtml_branch_coverage=1 00:12:30.554 --rc genhtml_function_coverage=1 00:12:30.554 --rc genhtml_legend=1 00:12:30.554 --rc geninfo_all_blocks=1 00:12:30.554 --rc geninfo_unexecuted_blocks=1 00:12:30.554 00:12:30.555 ' 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:30.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.555 --rc genhtml_branch_coverage=1 00:12:30.555 --rc genhtml_function_coverage=1 00:12:30.555 --rc genhtml_legend=1 00:12:30.555 --rc geninfo_all_blocks=1 00:12:30.555 --rc geninfo_unexecuted_blocks=1 00:12:30.555 00:12:30.555 ' 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:30.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.555 --rc genhtml_branch_coverage=1 00:12:30.555 --rc genhtml_function_coverage=1 00:12:30.555 --rc genhtml_legend=1 00:12:30.555 --rc geninfo_all_blocks=1 00:12:30.555 --rc geninfo_unexecuted_blocks=1 00:12:30.555 00:12:30.555 ' 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:30.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.555 --rc genhtml_branch_coverage=1 00:12:30.555 --rc genhtml_function_coverage=1 00:12:30.555 --rc genhtml_legend=1 00:12:30.555 --rc geninfo_all_blocks=1 00:12:30.555 --rc geninfo_unexecuted_blocks=1 00:12:30.555 00:12:30.555 ' 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:30.555 19:02:47 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:30.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:30.555 19:02:47 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:30.555 19:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:38.699 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:38.699 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:38.699 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.700 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:38.700 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:38.700 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.700 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:38.700 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.700 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:38.700 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:38.700 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:38.700 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:38.700 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.700 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:38.700 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:38.700 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.700 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:38.700 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:38.700 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:38.700 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:38.700 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:38.700 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:38.700 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:38.700 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:38.700 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:38.700 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:38.700 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:38.700 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:38.700 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:38.700 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:38.700 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:38.700 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:38.700 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:38.700 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:38.700 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:38.700 19:02:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:38.700 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:38.700 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:38.700 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:38.700 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:38.700 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:38.700 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:38.700 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:38.700 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:38.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:38.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.545 ms 00:12:38.700 00:12:38.700 --- 10.0.0.2 ping statistics --- 00:12:38.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.700 rtt min/avg/max/mdev = 0.545/0.545/0.545/0.000 ms 00:12:38.700 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:38.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:38.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:12:38.700 00:12:38.700 --- 10.0.0.1 ping statistics --- 00:12:38.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.700 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:12:38.700 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:38.700 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:38.700 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:38.700 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:38.700 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:38.700 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:38.700 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:38.700 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:38.700 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:38.700 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:38.700 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:38.700 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:38.700 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:38.700 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2849312 00:12:38.700 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2849312 00:12:38.700 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:38.700 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2849312 ']' 00:12:38.700 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.700 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:38.700 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.700 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:38.700 19:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:38.700 [2024-11-26 19:02:55.322512] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
00:12:38.700 [2024-11-26 19:02:55.322580] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.700 [2024-11-26 19:02:55.423104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:38.700 [2024-11-26 19:02:55.476359] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:38.701 [2024-11-26 19:02:55.476411] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:38.701 [2024-11-26 19:02:55.476420] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:38.701 [2024-11-26 19:02:55.476428] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:38.701 [2024-11-26 19:02:55.476434] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:38.701 [2024-11-26 19:02:55.478755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.701 [2024-11-26 19:02:55.478916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:38.701 [2024-11-26 19:02:55.479077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.701 [2024-11-26 19:02:55.479077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:38.962 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:38.962 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:12:38.962 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:38.962 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:38.962 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:39.224 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.224 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:39.224 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:39.224 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:39.224 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:39.224 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:39.224 "nvmf_tgt_1" 00:12:39.224 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:39.486 "nvmf_tgt_2" 00:12:39.486 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:12:39.486 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:39.486 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:39.486 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:39.747 true 00:12:39.747 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:39.747 true 00:12:39.747 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:39.747 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:40.007 19:02:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:40.007 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:40.007 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:40.007 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:40.007 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:40.007 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:40.007 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:40.007 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:40.007 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:40.007 rmmod nvme_tcp 00:12:40.007 rmmod nvme_fabrics 00:12:40.007 rmmod nvme_keyring 00:12:40.007 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:40.007 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:40.007 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:40.007 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2849312 ']' 00:12:40.007 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2849312 00:12:40.007 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2849312 ']' 00:12:40.007 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2849312 00:12:40.008 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:12:40.008 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:40.008 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2849312 00:12:40.008 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:40.008 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:40.008 19:02:57 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2849312' 00:12:40.008 killing process with pid 2849312 00:12:40.008 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2849312 00:12:40.008 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2849312 00:12:40.269 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:40.269 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:40.269 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:40.269 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:40.269 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:40.269 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:40.269 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:40.269 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:40.269 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:40.269 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.269 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:40.269 19:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.183 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:42.183 00:12:42.183 real 0m11.876s 00:12:42.183 user 0m10.263s 00:12:42.183 sys 0m6.242s 00:12:42.183 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:42.183 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:42.183 ************************************ 00:12:42.183 END TEST nvmf_multitarget 00:12:42.183 ************************************ 00:12:42.444 19:02:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:42.444 19:02:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:42.444 19:02:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:42.444 19:02:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:42.444 ************************************ 00:12:42.444 START TEST nvmf_rpc 00:12:42.444 ************************************ 00:12:42.444 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:42.444 * Looking for test storage... 
00:12:42.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.444 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:42.444 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:12:42.444 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:42.705 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:42.705 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:42.705 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:42.705 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:42.705 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:42.705 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:42.705 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:42.705 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:42.705 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:42.705 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:42.705 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:42.705 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:42.705 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:42.705 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:42.705 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:42.705 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:42.705 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:42.705 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:42.705 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:42.705 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:42.705 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:42.705 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:42.705 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:42.705 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:42.705 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:42.705 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:42.705 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:42.705 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:42.705 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:42.705 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:42.705 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:42.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.705 --rc genhtml_branch_coverage=1 00:12:42.705 --rc genhtml_function_coverage=1 00:12:42.705 --rc genhtml_legend=1 00:12:42.705 --rc geninfo_all_blocks=1 00:12:42.705 --rc geninfo_unexecuted_blocks=1 00:12:42.705 00:12:42.705 ' 00:12:42.705 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:42.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.705 --rc genhtml_branch_coverage=1 00:12:42.705 --rc genhtml_function_coverage=1 00:12:42.705 --rc genhtml_legend=1 00:12:42.705 --rc geninfo_all_blocks=1 00:12:42.705 --rc geninfo_unexecuted_blocks=1 00:12:42.705 00:12:42.705 ' 00:12:42.705 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:42.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.705 --rc genhtml_branch_coverage=1 00:12:42.705 --rc genhtml_function_coverage=1 00:12:42.706 --rc genhtml_legend=1 00:12:42.706 --rc geninfo_all_blocks=1 00:12:42.706 --rc geninfo_unexecuted_blocks=1 00:12:42.706 00:12:42.706 ' 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:42.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.706 --rc genhtml_branch_coverage=1 00:12:42.706 --rc genhtml_function_coverage=1 00:12:42.706 --rc genhtml_legend=1 00:12:42.706 --rc geninfo_all_blocks=1 00:12:42.706 --rc geninfo_unexecuted_blocks=1 00:12:42.706 00:12:42.706 ' 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
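The scripts/common.sh walk above is the guard that decides which lcov options autotest may use: lt 1.15 2 splits both version strings on IFS=.-:, iterates up to the longer field count, compares each pair of fields numerically (missing fields count as 0), and returns on the first inequality. A simplified bash re-creation of just that less-than walk, not the verbatim helper (the real cmp_versions also implements the other comparison operators):

    # sketch of the field-wise version compare traced above
    version_lt() {
        local -a a b
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        local i n=${#a[@]}
        (( ${#b[@]} > n )) && n=${#b[@]}
        for (( i = 0; i < n; i++ )); do
            (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0   # first lower field wins
            (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
        done
        return 1                                              # equal versions are not less-than
    }
    version_lt 1.15 2 && echo 'lcov 1.15 < 2: enable the --rc coverage flags'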
00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:42.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:42.706 19:02:59 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:42.706 19:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:50.850 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:50.850 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:50.851 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:50.851 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:50.851 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:50.851 19:03:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:50.851 19:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:50.851 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:50.851 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:50.851 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:50.851 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:50.851 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:50.851 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:50.851 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:50.851 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:50.851 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:50.851 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:50.851 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.588 ms 00:12:50.851 00:12:50.851 --- 10.0.0.2 ping statistics --- 00:12:50.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.851 rtt min/avg/max/mdev = 0.588/0.588/0.588/0.000 ms 00:12:50.851 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:50.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:50.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:12:50.851 00:12:50.851 --- 10.0.0.1 ping statistics --- 00:12:50.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.851 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:12:50.851 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:50.851 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:50.851 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:50.851 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:50.851 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:50.851 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:50.851 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:50.851 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:50.851 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp
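What nvmf_tcp_init built above is a self-contained two-endpoint rig on one box: the first E810 port (cvl_0_0) moves into a private network namespace where the SPDK target will listen as 10.0.0.2/24, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1/24, port 4420 is opened with a tagged iptables ACCEPT rule, and both directions are ping-verified before nvme-tcp is loaded. Condensed from the trace (same names and addresses; run as root):

    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target NIC into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns
    modprobe nvme-tcp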
00:12:50.851 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:50.851 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:50.851 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:50.851 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.851 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2854117 00:12:50.851 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2854117 00:12:50.851 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:50.851 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2854117 ']' 00:12:50.851 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.851 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:50.851 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.851 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:50.851 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.851 [2024-11-26 19:03:07.408456] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:12:50.851 [2024-11-26 19:03:07.408525] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:50.851 [2024-11-26 19:03:07.483761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:50.851 [2024-11-26 19:03:07.531200] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:50.851 [2024-11-26 19:03:07.531254] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:50.851 [2024-11-26 19:03:07.531261] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:50.851 [2024-11-26 19:03:07.531267] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:50.851 [2024-11-26 19:03:07.531271] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:50.851 [2024-11-26 19:03:07.533037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.851 [2024-11-26 19:03:07.533212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:50.851 [2024-11-26 19:03:07.533398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:50.851 [2024-11-26 19:03:07.533496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.851 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:50.852 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:50.852 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:50.852 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:50.852 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.852 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:50.852 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:50.852 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.852 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.852 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.852 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:50.852 "tick_rate": 2400000000, 00:12:50.852 "poll_groups": [ 00:12:50.852 { 00:12:50.852 "name": "nvmf_tgt_poll_group_000", 00:12:50.852 "admin_qpairs": 0, 00:12:50.852 "io_qpairs": 0, 00:12:50.852 "current_admin_qpairs": 0, 00:12:50.852 "current_io_qpairs": 0, 00:12:50.852 "pending_bdev_io": 0, 00:12:50.852 "completed_nvme_io": 0, 00:12:50.852 "transports": [] 00:12:50.852 }, 00:12:50.852 { 00:12:50.852 "name": "nvmf_tgt_poll_group_001", 00:12:50.852 "admin_qpairs": 0, 00:12:50.852 "io_qpairs": 0, 00:12:50.852 "current_admin_qpairs": 0, 00:12:50.852 "current_io_qpairs": 0, 00:12:50.852 "pending_bdev_io": 0, 00:12:50.852 "completed_nvme_io": 0, 00:12:50.852 "transports": [] 00:12:50.852 }, 00:12:50.852 { 00:12:50.852 "name": "nvmf_tgt_poll_group_002", 00:12:50.852 "admin_qpairs": 0, 00:12:50.852 "io_qpairs": 0, 00:12:50.852
"current_admin_qpairs": 0, 00:12:50.852 "current_io_qpairs": 0, 00:12:50.852 "pending_bdev_io": 0, 00:12:50.852 "completed_nvme_io": 0, 00:12:50.852 "transports": [] 00:12:50.852 }, 00:12:50.852 { 00:12:50.852 "name": "nvmf_tgt_poll_group_003", 00:12:50.852 "admin_qpairs": 0, 00:12:50.852 "io_qpairs": 0, 00:12:50.852 "current_admin_qpairs": 0, 00:12:50.852 "current_io_qpairs": 0, 00:12:50.852 "pending_bdev_io": 0, 00:12:50.852 "completed_nvme_io": 0, 00:12:50.852 "transports": [] 00:12:50.852 } 00:12:50.852 ] 00:12:50.852 }' 00:12:50.852 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:50.852 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:50.852 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:50.852 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:50.852 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:50.852 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:50.852 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:50.852 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:50.852 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.852 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.852 [2024-11-26 19:03:07.816704] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:50.852 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.852 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:50.852 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.852 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.852 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.852 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:50.852 "tick_rate": 2400000000, 00:12:50.852 "poll_groups": [ 00:12:50.852 { 00:12:50.852 "name": "nvmf_tgt_poll_group_000", 00:12:50.852 "admin_qpairs": 0, 00:12:50.852 "io_qpairs": 0, 00:12:50.852 "current_admin_qpairs": 0, 00:12:50.852 "current_io_qpairs": 0, 00:12:50.852 "pending_bdev_io": 0, 00:12:50.852 "completed_nvme_io": 0, 00:12:50.852 "transports": [ 00:12:50.852 { 00:12:50.852 "trtype": "TCP" 00:12:50.852 } 00:12:50.852 ] 00:12:50.852 }, 00:12:50.852 { 00:12:50.852 "name": "nvmf_tgt_poll_group_001", 00:12:50.852 "admin_qpairs": 0, 00:12:50.852 "io_qpairs": 0, 00:12:50.852 "current_admin_qpairs": 0, 00:12:50.852 "current_io_qpairs": 0, 00:12:50.852 "pending_bdev_io": 0, 00:12:50.852 "completed_nvme_io": 0, 00:12:50.852 "transports": [ 00:12:50.852 { 00:12:50.852 "trtype": "TCP" 00:12:50.852 } 00:12:50.852 ] 00:12:50.852 }, 00:12:50.852 { 00:12:50.852 "name": "nvmf_tgt_poll_group_002", 00:12:50.852 "admin_qpairs": 0, 00:12:50.852 "io_qpairs": 0, 00:12:50.852 "current_admin_qpairs": 0, 00:12:50.852 "current_io_qpairs": 0, 00:12:50.852 "pending_bdev_io": 0, 00:12:50.852 "completed_nvme_io": 0, 00:12:50.852 "transports": [ 00:12:50.852 { 00:12:50.852 "trtype": "TCP" 
00:12:50.852 } 00:12:50.852 ] 00:12:50.852 }, 00:12:50.852 { 00:12:50.852 "name": "nvmf_tgt_poll_group_003", 00:12:50.852 "admin_qpairs": 0, 00:12:50.852 "io_qpairs": 0, 00:12:50.852 "current_admin_qpairs": 0, 00:12:50.852 "current_io_qpairs": 0, 00:12:50.852 "pending_bdev_io": 0, 00:12:50.852 "completed_nvme_io": 0, 00:12:50.852 "transports": [ 00:12:50.852 { 00:12:50.852 "trtype": "TCP" 00:12:50.852 } 00:12:50.852 ] 00:12:50.852 } 00:12:50.852 ] 00:12:50.852 }' 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 ))
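jcount and jsum above are thin jq wrappers over the nvmf_get_stats payload: jcount counts the lines a filter emits (four poll groups, one per core under -m 0xF), and jsum pipes the per-group numbers through awk '{s+=$1}END{print s}' to total them, 0 apiece on this idle target. The same checks can be written as pure jq against rpc.py (socket path /var/tmp/spdk.sock assumed, matching rpc_cmd in the trace):

    RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC_PY -s /var/tmp/spdk.sock nvmf_get_stats | jq '.poll_groups | length'               # 4 poll groups
    $RPC_PY -s /var/tmp/spdk.sock nvmf_get_stats | jq '[.poll_groups[].admin_qpairs] | add' # 0 when idle
    $RPC_PY -s /var/tmp/spdk.sock nvmf_get_stats | jq '[.poll_groups[].io_qpairs] | add'    # 0 when idle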
19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.852 Malloc1 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.852 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.852 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.852 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:50.852 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.852 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.852 [2024-11-26 19:03:08.019103] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:50.852 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.852 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 [2024-11-26 19:03:08.056138] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:51.114 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:51.114 could not add new controller: failed to write to nvme-fabrics device 00:12:51.114 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 19:03:08
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.114 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.114 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:52.500 19:03:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 19:03:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 19:03:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 19:03:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 19:03:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:54.507 19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:54.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.833 19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
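Both helpers above identify the fabrics namespace purely by its serial number: waitforserial loops up to 15 times, sleeping 2 s and counting lsblk -l -o NAME,SERIAL lines that carry SPDKISFASTANDAWESOME, while waitforserial_disconnect does the inverse with grep -q -w. A rough standalone equivalent of the wait-for-attach side (simplified; the traced helper interleaves the sleep and bookkeeping slightly differently):

    # poll until a block device with the given NVMe serial appears
    wait_for_serial() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
            sleep 2
        done
        return 1
    }
    wait_for_serial SPDKISFASTANDAWESOME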
19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:54.833 19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.833 19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.833 19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.833 19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 [2024-11-26 19:03:11.810807] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:54.833 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:54.833 could not add new controller: failed to write to nvme-fabrics device 00:12:54.833 19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
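The failed @69 connect and the @72 fix above are the two sides of SPDK's per-subsystem host ACL: once allow_any_host is disabled and the host NQN has been removed from nqn.2016-06.io.spdk:cnode1, the TCP connect is rejected in ctrlr.c ('does not allow host') before any controller is created, and either re-adding the host or re-enabling allow_any_host lets the identical connect succeed, as the next step shows. Sketched against rpc.py with the NQNs from this run (rpc_cmd in the trace is effectively a wrapper around the same calls):

    RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SUBSYS=nqn.2016-06.io.spdk:cnode1
    HOST=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    $RPC_PY nvmf_subsystem_add_host "$SUBSYS" "$HOST"    # option 1: whitelist this host NQN
    $RPC_PY nvmf_subsystem_allow_any_host -e "$SUBSYS"   # option 2: what the test does here
    nvme connect --hostnqn="$HOST" --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be \
        -t tcp -n "$SUBSYS" -a 10.0.0.2 -s 4420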
00:12:54.833 19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.833 19:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:56.218 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:56.219 19:03:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:58.833 19:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 19:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 19:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:58.833 19:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 19:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 19:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 19:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme
19:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.834 19:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.834 19:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.834 19:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.834 19:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.834 19:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.834 [2024-11-26 19:03:15.578693] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.834 19:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.834 19:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:58.834 19:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.834 19:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.834 19:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.834 19:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:58.834 19:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.834 19:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.834 19:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.834 19:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:00.220 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:00.220 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:00.220 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:00.220 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:00.220 19:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:02.132 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:02.132 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:02.132 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:02.132 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:02.132 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:02.132 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:02.132 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:02.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.132 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:02.132 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:02.132 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:02.132 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:02.132 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:02.132 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:02.132 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:02.132 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:02.132 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.132 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.132 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.132 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:02.132 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.132 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.132 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.132 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:02.132 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:02.132 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.132 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.132 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.132 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:02.132 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.132 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.132 [2024-11-26 19:03:19.334206] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:02.132 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.132 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:02.132 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.132 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.392 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.392 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:02.392 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.392 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.392 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.392 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:03.777 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:03.777 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:03.777 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:03.777 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:03.777 19:03:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:05.691 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:05.691 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:05.691 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:05.691 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:05.691 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:05.691 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:05.691 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:05.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.953 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:05.953 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:05.953 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:05.953 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:05.953 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:05.953 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:05.953 19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:05.953 19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:05.953 19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.953 19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.953 19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
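Each of the five passes traced above and below follows the same lifecycle; condensed from the rpc.sh@81-94 xtrace (every command in the sketch appears verbatim in the trace, with rpc_cmd wrapping scripts/rpc.py and the host NQN/ID variables an assumed stand-in for the literal values logged):

    for i in $(seq 1 $loops); do    # $loops = 5, from the seq above
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # fixed nsid 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
             -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME              # device visible to host
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME   # device gone again
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done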
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.953 19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:05.953 19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.953 19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.953 19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.953 19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:05.953 19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:05.953 19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.953 19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.953 19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.953 19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:05.953 19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.953 19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.953 [2024-11-26 19:03:23.049995] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:05.953 19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.953 19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:05.953 19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.953 19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.953 19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.953 19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:05.953 19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.953 19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.953 19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.953 19:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:07.869 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:07.869 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:07.869 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:07.869 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:07.869 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:09.783 
19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:09.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.783 [2024-11-26 19:03:26.803324] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.783 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:11.167 19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:11.167 19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:11.167 19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:11.167 19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:11.167 19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:13.711 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:13.711 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:13.711 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:13.711 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:13.711 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:13.711 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:13.711 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:13.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.711 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:13.711 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:13.711 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:13.711 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:13:13.711 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:13.711 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:13.711 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:13.711 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:13.711 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.711 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.711 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.711 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:13.711 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.711 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.711 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.711 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:13.711 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:13.711 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.711 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.711 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.711 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:13.711 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.711 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.711 [2024-11-26 19:03:30.526354] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.711 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.712 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:13.712 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.712 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.712 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.712 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:13.712 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.712 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.712 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.712 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:15.097 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:15.097 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:15.097 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:15.097 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:15.097 19:03:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:17.011 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:17.011 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:17.011 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:17.011 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:17.011 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:17.011 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:17.011 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:17.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.011 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:17.011 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:17.011 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:17.011 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.011 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:17.011 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.011 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:17.011 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:17.011 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.011 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.011 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.011 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:17.011 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.011 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.011 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.011 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:17.011 
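The seq above opens a second loop (rpc.sh@99-107, traced below) that churns namespaces purely over RPC, with no host connection in the picture; note the nsid is now auto-assigned rather than forced to 5. A condensed sketch reconstructed from the trace:

    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # nsid auto-assigned
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # drop the auto nsid
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done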
19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:17.011 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:17.011 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.011 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.011 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.011 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:17.011 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.012 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.012 [2024-11-26 19:03:34.206112] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.012 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.012 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:17.012 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.012 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.275 [2024-11-26 19:03:34.270260] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.275 
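Once the remaining iterations below complete, rpc.sh@110-113 pulls nvmf_get_stats and verifies that the run actually exercised queue pairs, summing one JSON field across all poll groups with a small jq + awk helper. A sketch of that aggregation, assuming the helper is named jsum as the trace shows (its body is reconstructed from the rpc.sh@19-20 xtrace, not copied from source):

    stats=$(rpc_cmd nvmf_get_stats)
    jsum() {    # sum one numeric field across the poll_groups array
        jq "$1" <<< "$stats" | awk '{s+=$1} END {print s}'
    }
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 0+1+6+0 = 7 in the dump below
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 224+223+218+224 = 889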
19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.275 [2024-11-26 19:03:34.342480] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.275 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.276 [2024-11-26 19:03:34.414709] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.276 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.276 [2024-11-26 19:03:34.482936] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:17.538 "tick_rate": 2400000000, 00:13:17.538 "poll_groups": [ 00:13:17.538 { 00:13:17.538 "name": "nvmf_tgt_poll_group_000", 00:13:17.538 "admin_qpairs": 0, 00:13:17.538 "io_qpairs": 224, 00:13:17.538 "current_admin_qpairs": 0, 00:13:17.538 "current_io_qpairs": 0, 00:13:17.538 "pending_bdev_io": 0, 00:13:17.538 "completed_nvme_io": 225, 00:13:17.538 "transports": [ 00:13:17.538 { 00:13:17.538 "trtype": "TCP" 00:13:17.538 } 00:13:17.538 ] 00:13:17.538 }, 00:13:17.538 { 00:13:17.538 "name": "nvmf_tgt_poll_group_001", 00:13:17.538 "admin_qpairs": 1, 00:13:17.538 "io_qpairs": 223, 00:13:17.538 "current_admin_qpairs": 0, 00:13:17.538 "current_io_qpairs": 0, 00:13:17.538 "pending_bdev_io": 0, 00:13:17.538 "completed_nvme_io": 226, 00:13:17.538 "transports": [ 00:13:17.538 { 00:13:17.538 "trtype": "TCP" 00:13:17.538 } 00:13:17.538 ] 00:13:17.538 }, 00:13:17.538 { 00:13:17.538 "name": "nvmf_tgt_poll_group_002", 00:13:17.538 "admin_qpairs": 6, 00:13:17.538 "io_qpairs": 218, 00:13:17.538 "current_admin_qpairs": 0, 00:13:17.538 "current_io_qpairs": 0, 00:13:17.538 "pending_bdev_io": 0, 00:13:17.538 "completed_nvme_io": 270, 00:13:17.538 "transports": [ 00:13:17.538 { 00:13:17.538 "trtype": "TCP" 00:13:17.538 } 00:13:17.538 ] 00:13:17.538 }, 00:13:17.538 { 00:13:17.538 "name": "nvmf_tgt_poll_group_003", 00:13:17.538 "admin_qpairs": 0, 00:13:17.538 "io_qpairs": 224, 00:13:17.538 "current_admin_qpairs": 0, 00:13:17.538 "current_io_qpairs": 0, 00:13:17.538 "pending_bdev_io": 0, 00:13:17.538 "completed_nvme_io": 518, 00:13:17.538 "transports": [ 00:13:17.538 { 00:13:17.538 "trtype": "TCP" 00:13:17.538 } 00:13:17.538 ] 00:13:17.538 } 00:13:17.538 ] 00:13:17.538 }' 00:13:17.538 19:03:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:17.538 rmmod nvme_tcp 00:13:17.538 rmmod nvme_fabrics 00:13:17.538 rmmod nvme_keyring 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2854117 ']' 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2854117 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2854117 ']' 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2854117 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:17.538 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2854117 00:13:17.799 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:17.799 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:17.799 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2854117' 00:13:17.799 killing process with pid 2854117 00:13:17.799 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2854117 00:13:17.799 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2854117 00:13:17.799 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:17.799 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:17.799 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:17.799 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:17.799 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:13:17.799 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:17.799 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:13:17.799 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:17.799 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:17.799 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.799 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:17.799 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.345 19:03:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:20.345 00:13:20.345 real 0m37.512s 00:13:20.345 user 1m51.467s 00:13:20.345 sys 0m7.883s 00:13:20.345 19:03:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:20.345 19:03:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.345 ************************************ 00:13:20.345 END TEST nvmf_rpc 00:13:20.345 ************************************ 00:13:20.345 19:03:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:20.345 19:03:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:20.345 19:03:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:20.345 19:03:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:20.345 ************************************ 00:13:20.345 START TEST nvmf_invalid 00:13:20.345 ************************************ 00:13:20.345 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:20.345 * Looking for test storage... 
00:13:20.345 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:20.345 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:20.345 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:13:20.345 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:20.345 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:20.345 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:20.345 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:20.345 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:20.345 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:20.345 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:20.345 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:20.345 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:20.345 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:20.345 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:20.345 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:20.345 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:20.345 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:20.345 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:20.345 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:20.345 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:20.345 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:20.345 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:20.345 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:20.345 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:20.345 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:20.345 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:20.345 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:20.345 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:20.345 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:20.345 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:20.345 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:20.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.346 --rc genhtml_branch_coverage=1 00:13:20.346 --rc genhtml_function_coverage=1 00:13:20.346 --rc genhtml_legend=1 00:13:20.346 --rc geninfo_all_blocks=1 00:13:20.346 --rc geninfo_unexecuted_blocks=1 00:13:20.346 00:13:20.346 ' 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:20.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.346 --rc genhtml_branch_coverage=1 00:13:20.346 --rc genhtml_function_coverage=1 00:13:20.346 --rc genhtml_legend=1 00:13:20.346 --rc geninfo_all_blocks=1 00:13:20.346 --rc geninfo_unexecuted_blocks=1 00:13:20.346 00:13:20.346 ' 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:20.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.346 --rc genhtml_branch_coverage=1 00:13:20.346 --rc genhtml_function_coverage=1 00:13:20.346 --rc genhtml_legend=1 00:13:20.346 --rc geninfo_all_blocks=1 00:13:20.346 --rc geninfo_unexecuted_blocks=1 00:13:20.346 00:13:20.346 ' 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:20.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.346 --rc genhtml_branch_coverage=1 00:13:20.346 --rc genhtml_function_coverage=1 00:13:20.346 --rc genhtml_legend=1 00:13:20.346 --rc geninfo_all_blocks=1 00:13:20.346 --rc geninfo_unexecuted_blocks=1 00:13:20.346 00:13:20.346 ' 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:20.346 19:03:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:20.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
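
One detail worth noticing just above: nvmf/common.sh line 33 runs '[' '' -eq 1 ']' and bash prints "[: : integer expression expected", because -eq demands integers on both sides; the test simply fails and the script carries on. A quick reproduction with the usual guards (the variable name is illustrative):

    flag=""
    [ "$flag" -eq 1 ] && echo set        # errors on stderr: integer expression expected

    [ "${flag:-0}" -eq 1 ] && echo set   # guard 1: empty defaults to 0, test is just false
    [ "$flag" = 1 ] && echo set          # guard 2: string comparison never errors
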
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:20.346 19:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:28.484 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.484 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:28.485 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:28.485 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:28.485 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
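
The device scan above matches PCI vendor:device pairs against known NIC IDs — both ports found on this host are 0x8086:0x159b, an Intel E810 handled by the 'ice' driver — and then resolves each PCI address to its kernel net device through sysfs. A rough standalone equivalent of that sysfs walk (not the harness function itself):

    # Print the net devices backing a given PCI vendor:device pair.
    find_netdevs() {
      local want_vendor=$1 want_device=$2 pci
      for pci in /sys/bus/pci/devices/*; do
        [[ $(< "$pci/vendor") == "$want_vendor" ]] || continue
        [[ $(< "$pci/device") == "$want_device" ]] || continue
        ls "$pci/net" 2> /dev/null   # interface names appear here once a driver binds
      done
    }

    find_netdevs 0x8086 0x159b   # on this host: cvl_0_0 and cvl_0_1
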
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:28.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:28.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:13:28.485 00:13:28.485 --- 10.0.0.2 ping statistics --- 00:13:28.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.485 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:28.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:28.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:13:28.485 00:13:28.485 --- 10.0.0.1 ping statistics --- 00:13:28.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.485 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2864148 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2864148 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2864148 ']' 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:28.485 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:28.485 [2024-11-26 19:03:44.927565] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
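
nvmf_tcp_init, traced above, turns the two E810 ports into a self-contained rig: one port stays in the root namespace as the initiator (10.0.0.1), the other moves into a fresh network namespace as the target side (10.0.0.2), port 4420 is opened in iptables, and a ping in each direction proves the path before nvmf_tgt is launched inside that namespace (the common.sh@508 line above; the DPDK EAL and reactor start-up it produces continues below). Condensed to its essential commands, with the device and IP names this run used:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP listener port
    ping -c 1 10.0.0.2                                            # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target ns -> root ns
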
00:13:28.485 [2024-11-26 19:03:44.927632] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.485 [2024-11-26 19:03:45.029108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:28.485 [2024-11-26 19:03:45.081979] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.485 [2024-11-26 19:03:45.082034] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.485 [2024-11-26 19:03:45.082043] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:28.485 [2024-11-26 19:03:45.082051] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:28.485 [2024-11-26 19:03:45.082057] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:28.485 [2024-11-26 19:03:45.084511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.485 [2024-11-26 19:03:45.084671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.485 [2024-11-26 19:03:45.084836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.485 [2024-11-26 19:03:45.084837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:28.746 19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:28.746 19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:13:28.746 19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:28.746 19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:28.746 19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:28.746 19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.746 19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:28.746 19:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode5647 00:13:29.006 [2024-11-26 19:03:45.975074] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:29.006 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:29.006 { 00:13:29.006 "nqn": "nqn.2016-06.io.spdk:cnode5647", 00:13:29.006 "tgt_name": "foobar", 00:13:29.006 "method": "nvmf_create_subsystem", 00:13:29.006 "req_id": 1 00:13:29.006 } 00:13:29.006 Got JSON-RPC error response 00:13:29.006 response: 00:13:29.006 { 00:13:29.006 "code": -32603, 00:13:29.006 "message": "Unable to find target foobar" 00:13:29.006 }' 00:13:29.006 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:29.006 { 00:13:29.006 "nqn": "nqn.2016-06.io.spdk:cnode5647", 00:13:29.006 "tgt_name": "foobar", 00:13:29.006 "method": "nvmf_create_subsystem", 00:13:29.006 "req_id": 1 00:13:29.006 } 00:13:29.006 Got JSON-RPC error response 00:13:29.006 
response: 00:13:29.006 { 00:13:29.006 "code": -32603, 00:13:29.006 "message": "Unable to find target foobar" 00:13:29.006 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:29.006 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:29.006 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode8192 00:13:29.007 [2024-11-26 19:03:46.183954] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8192: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:29.269 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:29.269 { 00:13:29.269 "nqn": "nqn.2016-06.io.spdk:cnode8192", 00:13:29.269 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:29.269 "method": "nvmf_create_subsystem", 00:13:29.269 "req_id": 1 00:13:29.269 } 00:13:29.269 Got JSON-RPC error response 00:13:29.269 response: 00:13:29.269 { 00:13:29.269 "code": -32602, 00:13:29.269 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:29.269 }' 00:13:29.269 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:29.269 { 00:13:29.269 "nqn": "nqn.2016-06.io.spdk:cnode8192", 00:13:29.269 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:29.269 "method": "nvmf_create_subsystem", 00:13:29.269 "req_id": 1 00:13:29.269 } 00:13:29.269 Got JSON-RPC error response 00:13:29.269 response: 00:13:29.269 { 00:13:29.269 "code": -32602, 00:13:29.269 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:29.269 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:29.269 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:29.269 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode1496 00:13:29.269 [2024-11-26 19:03:46.392722] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1496: invalid model number 'SPDK_Controller' 00:13:29.269 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:29.269 { 00:13:29.269 "nqn": "nqn.2016-06.io.spdk:cnode1496", 00:13:29.269 "model_number": "SPDK_Controller\u001f", 00:13:29.269 "method": "nvmf_create_subsystem", 00:13:29.269 "req_id": 1 00:13:29.269 } 00:13:29.269 Got JSON-RPC error response 00:13:29.269 response: 00:13:29.269 { 00:13:29.269 "code": -32602, 00:13:29.269 "message": "Invalid MN SPDK_Controller\u001f" 00:13:29.269 }' 00:13:29.269 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:29.269 { 00:13:29.269 "nqn": "nqn.2016-06.io.spdk:cnode1496", 00:13:29.269 "model_number": "SPDK_Controller\u001f", 00:13:29.269 "method": "nvmf_create_subsystem", 00:13:29.269 "req_id": 1 00:13:29.269 } 00:13:29.269 Got JSON-RPC error response 00:13:29.269 response: 00:13:29.269 { 00:13:29.269 "code": -32602, 00:13:29.269 "message": "Invalid MN SPDK_Controller\u001f" 00:13:29.269 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:29.269 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:29.269 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:29.269 19:03:46 
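
The three rejections above share one pattern: call nvmf_create_subsystem through rpc.py with exactly one deliberately bad field, capture the JSON-RPC error blob, and glob-match the message (*Unable to find target*, *Invalid SN*, *Invalid MN* — codes -32603 and -32602). Reduced to a single self-contained case, using the same workspace path as this run:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # -t foobar names a target that does not exist, so the call must fail.
    out=$("$rpc" nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode5647 2>&1)

    if [[ $out == *"Unable to find target"* ]]; then
      echo 'PASS: bad target rejected with the expected message'
    else
      echo "FAIL: unexpected response: $out"
    fi
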
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:29.269 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:29.269 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:29.269 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:29.269 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.269 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:29.269 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:29.269 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:13:29.269 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.269 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.269 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:29.269 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:29.269 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:29.269 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.269 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.269 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:29.269 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:29.269 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:29.269 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.269 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.269 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:29.269 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:29.269 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:29.269 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.269 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.269 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:29.269 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:29.269 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:29.269 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.269 19:03:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
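
The long run of xtrace above and below is gen_random_s (target/invalid.sh@19-31) assembling a 21-character string: RANDOM=0 earlier reseeded bash's PRNG so the "random" draw is reproducible, each pick indexes a table of ASCII codes 32-127, and printf %x plus echo -e turns a code into its character before appending. A compact sketch of the same construction (the real function also rejects a leading '-', checked at invalid.sh@28, so the result can never parse as a command-line option):

    gen_random_s() {
      local length=$1 ll string=
      local -a chars
      for (( ll = 32; ll <= 127; ll++ )); do chars+=("$ll"); done  # same code range as the table above
      for (( ll = 0; ll < length; ll++ )); do
        # pick a code, render it as hex, expand \xNN into the character
        string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
      done
      echo "$string"
    }

    RANDOM=0        # fixed seed: every run draws the same sequence
    gen_random_s 21
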
00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x79' 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:29.531 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.532 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.532 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:29.532 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:29.532 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:29.532 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.532 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.532 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:29.532 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:29.532 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:29.532 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.532 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.532 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:29.532 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:29.532 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:29.532 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.532 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.532 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:29.532 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:29.532 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:29.532 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.532 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.532 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:29.532 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:29.532 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:29.532 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.532 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.532 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ? 
== \- ]] 00:13:29.532 19:03:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '?SS9BCBN!uxf<Oxy}v<WN' [...] -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:32.156 19:03:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.711 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:34.711 00:13:34.711 real 0m14.265s user 0m21.330s sys 0m6.804s 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:34.712 ************************************ 00:13:34.712 END TEST nvmf_invalid 00:13:34.712 ************************************ 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:34.712 ************************************ 00:13:34.712 START TEST nvmf_connect_stress 00:13:34.712 ************************************ 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:34.712 * Looking for test storage... 00:13:34.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0
eq=0 v 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:34.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.712 --rc genhtml_branch_coverage=1 00:13:34.712 --rc genhtml_function_coverage=1 00:13:34.712 --rc genhtml_legend=1 00:13:34.712 --rc geninfo_all_blocks=1 00:13:34.712 --rc geninfo_unexecuted_blocks=1 00:13:34.712 00:13:34.712 ' 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:34.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.712 --rc genhtml_branch_coverage=1 00:13:34.712 --rc genhtml_function_coverage=1 00:13:34.712 --rc genhtml_legend=1 00:13:34.712 --rc geninfo_all_blocks=1 00:13:34.712 --rc geninfo_unexecuted_blocks=1 00:13:34.712 00:13:34.712 ' 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:34.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.712 --rc genhtml_branch_coverage=1 00:13:34.712 --rc genhtml_function_coverage=1 00:13:34.712 --rc genhtml_legend=1 00:13:34.712 --rc geninfo_all_blocks=1 00:13:34.712 --rc geninfo_unexecuted_blocks=1 00:13:34.712 00:13:34.712 ' 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:34.712 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.712 --rc genhtml_branch_coverage=1 00:13:34.712 --rc genhtml_function_coverage=1 00:13:34.712 --rc genhtml_legend=1 00:13:34.712 --rc geninfo_all_blocks=1 00:13:34.712 --rc geninfo_unexecuted_blocks=1 00:13:34.712 00:13:34.712 ' 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:34.712 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:34.713 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:34.713 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:34.713 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:34.713 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:34.713 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:34.713 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:34.713 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:34.713 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:34.713 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.713 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.713 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.713 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:34.713 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.713 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:34.713 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:34.713 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:34.713 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:34.713 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:34.713 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:34.713 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:34.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:34.713 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:34.713 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:34.713 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:34.713 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:34.713 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:34.713 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:34.713 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:34.713 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:34.713 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:34.713 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.713 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:34.713 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.713 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:34.713 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:34.713 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:34.713 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.857 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:42.857 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:42.857 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:42.857 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:42.857 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:42.857 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:42.857 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:42.857 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:42.857 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:42.857 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:42.857 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:42.857 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:42.857 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:42.857 19:03:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:42.857 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:42.857 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:42.857 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:42.857 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:42.857 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:42.857 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:42.857 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:42.857 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:42.857 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:42.857 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:42.857 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:42.857 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:42.857 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:42.858 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:42.858 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:42.858 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:42.858 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:42.858 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:42.858 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:42.858 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:42.858 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:42.858 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:42.858 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:42.858 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:42.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:42.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:13:42.858 00:13:42.858 --- 10.0.0.2 ping statistics --- 00:13:42.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.858 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:13:42.858 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:42.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:42.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:13:42.858 00:13:42.858 --- 10.0.0.1 ping statistics --- 00:13:42.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.858 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:13:42.858 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:42.858 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:42.858 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:42.858 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:42.858 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:42.858 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:42.858 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:42.858 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:42.858 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:42.858 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:42.858 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:42.858 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:42.858 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.858 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2869363 00:13:42.858 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2869363 00:13:42.858 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:42.858 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2869363 ']' 00:13:42.858 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.858 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:42.859 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:42.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.859 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:42.859 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.859 [2024-11-26 19:03:59.254471] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:13:42.859 [2024-11-26 19:03:59.254539] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.859 [2024-11-26 19:03:59.353755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:42.859 [2024-11-26 19:03:59.405796] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:42.859 [2024-11-26 19:03:59.405847] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:42.859 [2024-11-26 19:03:59.405856] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:42.859 [2024-11-26 19:03:59.405869] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:42.859 [2024-11-26 19:03:59.405875] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:42.859 [2024-11-26 19:03:59.407758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:42.859 [2024-11-26 19:03:59.407920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:42.859 [2024-11-26 19:03:59.407920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:43.120 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:43.120 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.121 [2024-11-26 19:04:00.132945] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.121 [2024-11-26 19:04:00.161329] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.121 NULL1 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2869645 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.121 19:04:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:43.121 19:04:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2869645 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.121 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.693 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.693 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2869645 00:13:43.693 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.693 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.693 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.954 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.954 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2869645 00:13:43.954 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.954 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.954 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.215 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.215 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2869645 00:13:44.215 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.215 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.215 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.476 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.477 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2869645 00:13:44.477 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.477 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.477 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.738 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.738 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2869645 00:13:44.738 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.738 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.738 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.309 19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.310 19:04:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2869645 00:13:45.310 19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.310 19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.310 19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.571 19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.571 19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2869645 00:13:45.571 19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.571 19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.571 19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.833 19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.833 19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2869645 00:13:45.833 19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.833 19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.833 19:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.094 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.094 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2869645 00:13:46.094 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.094 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.094 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.355 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.355 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2869645 00:13:46.355 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.355 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.355 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.928 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.928 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2869645 00:13:46.928 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.928 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.928 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.188 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.188 19:04:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2869645 00:13:47.188 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.188 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.188 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.449 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.449 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2869645 00:13:47.449 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.449 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.449 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.710 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.710 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2869645 00:13:47.710 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.710 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.710 19:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.970 19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.970 19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2869645 00:13:47.970 19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.971 19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.971 19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.543 19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.543 19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2869645 00:13:48.543 19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.543 19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.543 19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.805 19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.805 19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2869645 00:13:48.805 19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.805 19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.805 19:04:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.065 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.065 19:04:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2869645 00:13:49.065 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.065 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.065 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.326 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.326 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2869645 00:13:49.326 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.326 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.326 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.897 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.897 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2869645 00:13:49.897 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.897 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.897 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.158 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.158 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2869645 00:13:50.158 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.158 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.158 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.450 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.450 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2869645 00:13:50.450 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.450 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.450 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.752 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.752 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2869645 00:13:50.752 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.752 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.752 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.036 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.036 19:04:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2869645 00:13:51.036 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.036 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.036 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.317 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.317 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2869645 00:13:51.317 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.317 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.317 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.578 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.578 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2869645 00:13:51.578 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.578 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.578 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.148 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.148 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2869645 00:13:52.148 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.148 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.148 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.408 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.408 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2869645 00:13:52.408 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.408 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.408 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.670 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.670 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2869645 00:13:52.670 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.670 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.670 19:04:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.931 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.931 19:04:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2869645 00:13:52.931 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.931 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.931 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.192 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.192 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2869645 00:13:53.192 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.192 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.192 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.193 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:53.764 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.764 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2869645 00:13:53.764 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2869645) - No such process 00:13:53.764 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2869645 00:13:53.764 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:53.764 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:53.764 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:53.764 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:53.764 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:53.764 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:53.764 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:53.764 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:53.764 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:53.764 rmmod nvme_tcp 00:13:53.764 rmmod nvme_fabrics 00:13:53.764 rmmod nvme_keyring 00:13:53.764 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:53.764 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:53.764 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:53.764 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2869363 ']' 00:13:53.764 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2869363 00:13:53.764 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2869363 ']' 00:13:53.764 19:04:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2869363 00:13:53.764 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:13:53.764 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:53.764 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2869363 00:13:53.764 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:53.764 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:53.764 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2869363' 00:13:53.764 killing process with pid 2869363 00:13:53.764 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2869363 00:13:53.764 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2869363 00:13:54.024 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:54.024 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:54.024 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:54.024 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:54.024 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:54.024 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:54.024 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:54.024 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:54.024 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:54.024 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.024 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:54.024 19:04:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.945 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:55.945 00:13:55.945 real 0m21.645s 00:13:55.945 user 0m43.120s 00:13:55.945 sys 0m9.547s 00:13:55.946 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:55.946 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.946 ************************************ 00:13:55.946 END TEST nvmf_connect_stress 00:13:55.946 ************************************ 00:13:55.946 19:04:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:55.946 19:04:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:55.946 
19:04:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:55.946 19:04:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:55.946 ************************************ 00:13:55.946 START TEST nvmf_fused_ordering 00:13:55.946 ************************************ 00:13:55.946 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:56.221 * Looking for test storage... 00:13:56.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:56.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.221 --rc genhtml_branch_coverage=1 00:13:56.221 --rc genhtml_function_coverage=1 00:13:56.221 --rc genhtml_legend=1 00:13:56.221 --rc geninfo_all_blocks=1 00:13:56.221 --rc geninfo_unexecuted_blocks=1 00:13:56.221 00:13:56.221 ' 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:56.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.221 --rc genhtml_branch_coverage=1 00:13:56.221 --rc genhtml_function_coverage=1 00:13:56.221 --rc genhtml_legend=1 00:13:56.221 --rc geninfo_all_blocks=1 00:13:56.221 --rc geninfo_unexecuted_blocks=1 00:13:56.221 00:13:56.221 ' 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:56.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.221 --rc genhtml_branch_coverage=1 00:13:56.221 --rc genhtml_function_coverage=1 00:13:56.221 --rc genhtml_legend=1 00:13:56.221 --rc geninfo_all_blocks=1 00:13:56.221 --rc geninfo_unexecuted_blocks=1 00:13:56.221 00:13:56.221 ' 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:56.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.221 --rc genhtml_branch_coverage=1 00:13:56.221 --rc genhtml_function_coverage=1 00:13:56.221 --rc genhtml_legend=1 00:13:56.221 --rc geninfo_all_blocks=1 00:13:56.221 --rc geninfo_unexecuted_blocks=1 00:13:56.221 00:13:56.221 ' 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:56.221 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:56.222 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:56.222 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:56.222 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:56.222 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:56.222 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:56.222 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:56.222 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:56.222 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:56.222 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:56.222 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:56.222 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.222 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.222 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.222 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:56.222 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.222 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:56.222 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:56.222 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:56.222 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:56.222 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:56.222 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:56.222 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:56.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:56.222 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:56.222 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:56.222 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:56.222 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:56.222 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:56.222 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:56.222 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:56.222 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:56.222 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:56.222 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.222 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:56.222 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.222 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:56.222 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:56.222 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:56.222 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.396 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:14:04.397 19:04:20 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:04.397 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:04.397 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:04.397 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:04.397 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:04.397 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:04.398 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:04.398 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:04.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:04.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms 00:14:04.398 00:14:04.398 --- 10.0.0.2 ping statistics --- 00:14:04.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.398 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:14:04.398 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:04.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:04.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:14:04.398 00:14:04.398 --- 10.0.0.1 ping statistics --- 00:14:04.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.398 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:14:04.398 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:04.398 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:14:04.398 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:04.398 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:04.398 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:04.398 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:04.398 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:04.398 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:04.398 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:04.398 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:04.398 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:04.398 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:04.398 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.398 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2876009 00:14:04.398 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2876009 00:14:04.398 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:04.398 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2876009 ']' 00:14:04.398 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.398 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:04.398 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:04.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.398 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:04.398 19:04:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.398 [2024-11-26 19:04:20.961868] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:14:04.398 [2024-11-26 19:04:20.961934] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.398 [2024-11-26 19:04:21.061913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.398 [2024-11-26 19:04:21.112120] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:04.398 [2024-11-26 19:04:21.112183] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:04.398 [2024-11-26 19:04:21.112193] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:04.398 [2024-11-26 19:04:21.112200] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:04.398 [2024-11-26 19:04:21.112207] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:04.398 [2024-11-26 19:04:21.112958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:04.660 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:04.660 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:14:04.660 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:04.660 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:04.660 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.660 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:04.660 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:04.660 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.660 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.660 [2024-11-26 19:04:21.843624] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:04.660 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.660 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:04.660 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.660 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.660 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:04.660 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:04.660 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.660 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.660 [2024-11-26 19:04:21.867940] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:04.922 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.922 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:04.922 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.922 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.922 NULL1 00:14:04.922 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.922 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:04.922 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.922 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.923 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.923 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:04.923 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.923 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.923 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.923 19:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:04.923 [2024-11-26 19:04:21.938260] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
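Before the target comes up, the trace above runs the coverage-tooling probe from scripts/common.sh: "lt 1.15 2" calls cmp_versions, which splits both version strings on '.', '-' and ':' and compares them field by field, concluding that lcov 1.15 predates 2.x before enabling the --rc lcov_branch_coverage / lcov_function_coverage options. A minimal bash sketch of that field-wise comparison; version_lt and the demo line are illustrative stand-ins, not the SPDK helpers themselves:

    # version_lt A B -> exit 0 when version A sorts strictly before version B.
    version_lt() {
        local -a ver1 ver2
        local i len
        IFS=.-: read -ra ver1 <<< "$1"   # same split the trace shows (IFS=.-:)
        IFS=.-: read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((i = 0; i < len; i++)); do
            # A missing field counts as 0, so "1.15" vs "2" compares cleanly.
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "old lcov: enable branch/function coverage flags"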
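The "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" message captured earlier in this trace is the classic shape of [ "$VAR" -eq 1 ] being evaluated while the variable is empty (the xtrace shows the expanded form '[' '' -eq 1 ']'); the run survives only because [ returns non-zero instead of aborting the script. A defensive sketch of the pattern; VAR is a hypothetical stand-in, since the excerpt does not identify which flag was unset:

    VAR=""                            # stand-in for the flag that expanded to ''
    # [ "$VAR" -eq 1 ]                # reproduces: [: : integer expression expected
    if [ "${VAR:-0}" -eq 1 ]; then    # default the expansion: empty becomes 0
        echo "feature enabled"
    fi
    if (( ${VAR:-0} == 1 )); then     # same guard in bash arithmetic context
        echo "feature enabled"
    fi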
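The nvmf_tcp_init sequence traced above builds the test's two-endpoint topology on a single host: target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2, initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, an iptables rule opens the 4420 listener, and one ping in each direction proves reachability. Condensed to just those commands, with the interface names and addresses from this run (root required; the run also tags its iptables rule with an SPDK_NVMF comment):

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1     # start from clean ports
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port -> namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> root ns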
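The rpc_cmd calls above assemble everything the fused-ordering client needs: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, and a 1000 MiB, 512-byte-block null bdev exposed as namespace 1 (the "Namespace ID: 1 size: 1GB" line below). rpc_cmd is autotest's wrapper around SPDK's scripts/rpc.py, so the same configuration should be replayable by hand against the nvmf_tgt launched above; the RPC variable and explicit socket path below are assumptions based on the /var/tmp/spdk.sock default visible in the trace:

    # nvmf_tgt is already running inside the namespace, started above as:
    #   ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"           # run from the spdk checkout

    $RPC nvmf_create_transport -t tcp -o -u 8192           # flags exactly as logged
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
         -a -s SPDK00000000000001 -m 10                    # allow any host, max 10 ns
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
         -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512                   # 1000 MiB bdev, 512 B blocks
    $RPC bdev_wait_for_examine
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The client half is then the single invocation logged below: test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'.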
00:14:04.923 [2024-11-26 19:04:21.938303] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2876043 ] 00:14:05.495 Attached to nqn.2016-06.io.spdk:cnode1 00:14:05.495 Namespace ID: 1 size: 1GB
00:14:05.495 fused_ordering(0) ... 00:14:07.163 fused_ordering(1023) [the 1024 per-iteration fused_ordering(N) progress lines, N=0 through 1023, emitted between 00:14:05.495 and 00:14:07.163, are elided here; only the first and last are kept]
00:14:07.163 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:07.163 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:07.163 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:07.163 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:14:07.163 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:07.163 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:14:07.163 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:07.163 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:07.163 rmmod nvme_tcp 00:14:07.424 rmmod nvme_fabrics 00:14:07.424 rmmod nvme_keyring 00:14:07.424 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:07.424 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:14:07.424 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:14:07.424 19:04:24
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2876009 ']' 00:14:07.424 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2876009 00:14:07.424 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2876009 ']' 00:14:07.424 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2876009 00:14:07.424 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:14:07.424 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:07.424 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2876009 00:14:07.424 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:07.424 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:07.424 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2876009' 00:14:07.424 killing process with pid 2876009 00:14:07.424 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2876009 00:14:07.424 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2876009 00:14:07.424 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:07.424 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:07.424 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:07.424 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:14:07.424 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:14:07.424 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:07.424 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:14:07.424 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:07.424 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:07.424 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.424 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:07.424 19:04:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.969 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:09.969 00:14:09.969 real 0m13.538s 00:14:09.969 user 0m7.166s 00:14:09.969 sys 0m7.271s 00:14:09.969 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:09.969 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:09.969 ************************************ 00:14:09.969 END TEST nvmf_fused_ordering 00:14:09.969 
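The trace above is the standard nvmftestfini teardown: drop the signal trap, retry the module unloads with failures tolerated, kill the target by PID, restore iptables while filtering out the rules the test tagged, then dismantle the network namespace. A minimal standalone sketch of that sequence follows; $nvmfpid and the cvl_* names mirror this run, but the sketch is illustrative rather than the verbatim script:

  # Sketch of the nvmftestfini-style teardown traced above (illustrative).
  set +e                                    # module unloads may fail while I/O paths drain
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
      sleep 1
  done
  set -e
  kill "$nvmfpid" && wait "$nvmfpid"        # nvmf_tgt was started by this shell, so wait works
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the test's tagged rules
  ip netns delete cvl_0_0_ns_spdk 2> /dev/null           # remove the target-side namespace
  ip -4 addr flush cvl_0_1                               # clear the initiator-side address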
************************************ 00:14:09.969 19:04:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:09.969 19:04:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:09.969 19:04:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:09.969 19:04:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:09.969 ************************************ 00:14:09.969 START TEST nvmf_ns_masking 00:14:09.969 ************************************ 00:14:09.969 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:09.969 * Looking for test storage... 00:14:09.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:09.969 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:09.969 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:09.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.970 --rc genhtml_branch_coverage=1 00:14:09.970 --rc genhtml_function_coverage=1 00:14:09.970 --rc genhtml_legend=1 00:14:09.970 --rc geninfo_all_blocks=1 00:14:09.970 --rc geninfo_unexecuted_blocks=1 00:14:09.970 00:14:09.970 ' 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:09.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.970 --rc genhtml_branch_coverage=1 00:14:09.970 --rc genhtml_function_coverage=1 00:14:09.970 --rc genhtml_legend=1 00:14:09.970 --rc geninfo_all_blocks=1 00:14:09.970 --rc geninfo_unexecuted_blocks=1 00:14:09.970 00:14:09.970 ' 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:09.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.970 --rc genhtml_branch_coverage=1 00:14:09.970 --rc genhtml_function_coverage=1 00:14:09.970 --rc genhtml_legend=1 00:14:09.970 --rc geninfo_all_blocks=1 00:14:09.970 --rc geninfo_unexecuted_blocks=1 00:14:09.970 00:14:09.970 ' 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:09.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.970 --rc genhtml_branch_coverage=1 00:14:09.970 --rc genhtml_function_coverage=1 00:14:09.970 --rc genhtml_legend=1 00:14:09.970 --rc geninfo_all_blocks=1 00:14:09.970 --rc geninfo_unexecuted_blocks=1 00:14:09.970 00:14:09.970 ' 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:09.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
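The "[: : integer expression expected" message logged above is bash complaining that '[' '' -eq 1 ']' compares an empty string as an integer: whatever flag nvmf/common.sh line 33 checks was never exported for this job, so the test degrades to a warning instead of taking the branch. The log does not show the variable's name, so SPDK_SOME_FLAG below is a placeholder; the defensive pattern itself is the point:

  # Hypothetical guard for the failing test at nvmf/common.sh line 33.
  if [ "${SPDK_SOME_FLAG:-0}" -eq 1 ]; then   # default empty/unset to 0 before -eq
      NVMF_APP+=(--some-extra-arg)            # illustrative branch body only
  fi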
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:09.970 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:09.971 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=c6c46def-7021-47cb-ba36-1a0a9e8c1890 00:14:09.971 19:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:09.971 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=435cbaeb-b55b-46ef-821f-6d7e3bd39a5e 00:14:09.971 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:09.971 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:09.971 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:09.971 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:09.971 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=b484065e-607a-4cbe-91cf-4c4c6779f5c1 00:14:09.971 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:09.971 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:09.971 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:09.971 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:09.971 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:09.971 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:09.971 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.971 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:09.971 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.971 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:09.971 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:09.971 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:09.971 19:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:18.113 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:18.114 19:04:34 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:18.114 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:18.114 19:04:34 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:18.114 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:18.114 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:18.114 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:18.114 19:04:34 
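The block above is nvmf_tcp_init building the split topology used on physical NICs: one port of the e810 pair is moved into a private network namespace to act as the target (10.0.0.2) while its sibling stays in the root namespace as the initiator (10.0.0.1). Condensed from the trace, the sequence is roughly:

  # Target/initiator split across a network namespace (addresses as in this log).
  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                       # private namespace for the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target-side port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up    # loopback inside the namespace too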
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:18.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:18.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.538 ms 00:14:18.114 00:14:18.114 --- 10.0.0.2 ping statistics --- 00:14:18.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.114 rtt min/avg/max/mdev = 0.538/0.538/0.538/0.000 ms 00:14:18.114 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:18.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:18.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:14:18.114 00:14:18.114 --- 10.0.0.1 ping statistics --- 00:14:18.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.114 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:14:18.115 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:18.115 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:14:18.115 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:18.115 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:18.115 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:18.115 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:18.115 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:18.115 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:18.115 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:18.115 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:18.115 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:18.115 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:18.115 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:18.115 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2880783 00:14:18.115 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2880783 00:14:18.115 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:18.115 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2880783 ']' 00:14:18.115 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
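The ipts wrapper seen above tags every rule it inserts with an SPDK_NVMF: comment; that tag is exactly what lets the teardown restore the firewall with a plain grep -v SPDK_NVMF. The pattern in isolation, using the same rule this run inserted:

  # Insert a firewall rule tagged so it can be stripped wholesale at cleanup time.
  rule=(-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT)
  iptables "${rule[@]}" -m comment --comment "SPDK_NVMF:${rule[*]}"
  # Later, during nvmftestfini:
  iptables-save | grep -v SPDK_NVMF | iptables-restore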
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.115 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:18.115 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.115 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:18.115 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:18.115 [2024-11-26 19:04:34.631768] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:14:18.115 [2024-11-26 19:04:34.631834] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.115 [2024-11-26 19:04:34.731514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.115 [2024-11-26 19:04:34.782992] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:18.115 [2024-11-26 19:04:34.783046] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:18.115 [2024-11-26 19:04:34.783054] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:18.115 [2024-11-26 19:04:34.783062] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:18.115 [2024-11-26 19:04:34.783068] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:18.115 [2024-11-26 19:04:34.783856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.376 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:18.376 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:18.376 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:18.376 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:18.376 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:18.376 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:18.376 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:18.637 [2024-11-26 19:04:35.655853] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:18.637 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:18.637 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:18.637 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:18.898 Malloc1 00:14:18.898 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:18.898 Malloc2 00:14:18.898 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:19.159 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:19.419 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:19.683 [2024-11-26 19:04:36.634842] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:19.683 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:19.683 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b484065e-607a-4cbe-91cf-4c4c6779f5c1 -a 10.0.0.2 -s 4420 -i 4 00:14:19.683 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:19.683 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:19.683 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:19.683 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:19.683 
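Everything from nvmf_create_transport through the nvme connect above is the standard provisioning sequence for this test: create the TCP transport, back two namespaces with 64 MiB/512 B malloc bdevs, expose them through a subsystem and listener, then attach from the initiator with an explicit host NQN and host ID. Collapsed into the bare calls (rpc.py stands for the full scripts/rpc.py path used above):

  # Target-side provisioning, as driven by ns_masking.sh.
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py bdev_malloc_create 64 512 -b Malloc2
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator side: connect with an explicit host NQN and host ID.
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      -I b484065e-607a-4cbe-91cf-4c4c6779f5c1 -a 10.0.0.2 -s 4420 -i 4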
19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:21.600 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:21.600 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:21.600 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:21.600 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:21.600 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:21.600 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:21.600 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:21.600 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:21.861 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:21.861 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:21.861 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:21.861 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:21.861 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:21.861 [ 0]:0x1 00:14:21.861 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:21.861 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:21.861 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0b54320984f0415f9ee5f74af57c34bb 00:14:21.861 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0b54320984f0415f9ee5f74af57c34bb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:21.861 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:22.122 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:22.122 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:22.122 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:22.122 [ 0]:0x1 00:14:22.122 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:22.122 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:22.122 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0b54320984f0415f9ee5f74af57c34bb 00:14:22.122 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0b54320984f0415f9ee5f74af57c34bb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.122 19:04:39 
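The "[ 0]:0x1" lines come from ns_is_visible, which greps nvme list-ns for the namespace ID and then reads the NGUID back with nvme id-ns; an all-zero NGUID means the controller cannot actually see the namespace. Condensed from the trace (ns_masking.sh lines 43-45), the helper is roughly:

  # Succeeds iff the controller reports a real (non-zero) NGUID for namespace $1.
  ns_is_visible() {
      nvme list-ns /dev/nvme0 | grep "$1"            # e.g. prints "[ 0]:0x1"
      local nguid
      nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
      [[ $nguid != "00000000000000000000000000000000" ]]
  }
  ns_is_visible 0x1   # visible namespace: returns 0 and shows its NGUID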
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:22.122 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:22.122 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:22.122 [ 1]:0x2 00:14:22.122 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:22.122 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:22.122 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d93db3825cfe42b89498ba40fbccf0bc 00:14:22.122 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d93db3825cfe42b89498ba40fbccf0bc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.122 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:22.122 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:22.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.383 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:22.383 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:22.644 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:22.644 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b484065e-607a-4cbe-91cf-4c4c6779f5c1 -a 10.0.0.2 -s 4420 -i 4 00:14:22.644 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:22.644 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:22.644 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:22.644 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:14:22.644 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:14:22.644 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:25.189 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:25.189 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:25.189 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:25.189 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:25.189 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:25.189 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:14:25.189 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:25.189 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:25.189 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:25.189 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:25.189 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:25.189 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:25.189 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:25.189 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:25.189 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:25.189 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:25.189 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:25.189 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:25.189 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:25.189 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:25.189 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:25.189 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:25.189 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:25.189 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:25.189 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:25.189 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:25.189 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:25.190 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:25.190 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:25.190 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:25.190 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:25.190 [ 0]:0x2 00:14:25.190 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:25.190 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:25.190 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=d93db3825cfe42b89498ba40fbccf0bc 00:14:25.190 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d93db3825cfe42b89498ba40fbccf0bc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:25.190 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:25.190 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:25.190 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:25.190 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:25.190 [ 0]:0x1 00:14:25.190 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:25.190 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:25.190 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0b54320984f0415f9ee5f74af57c34bb 00:14:25.190 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0b54320984f0415f9ee5f74af57c34bb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:25.190 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:25.190 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:25.190 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:25.190 [ 1]:0x2 00:14:25.190 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:25.190 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:25.190 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d93db3825cfe42b89498ba40fbccf0bc 00:14:25.190 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d93db3825cfe42b89498ba40fbccf0bc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:25.190 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:25.450 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:25.450 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:25.450 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:25.450 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:25.450 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:25.450 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:25.450 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:25.450 19:04:42 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:25.450 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:25.450 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:25.450 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:25.450 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:25.450 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:25.450 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:25.450 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:25.450 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:25.450 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:25.450 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:25.450 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:25.450 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:25.450 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:25.450 [ 0]:0x2 00:14:25.450 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:25.450 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:25.450 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d93db3825cfe42b89498ba40fbccf0bc 00:14:25.450 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d93db3825cfe42b89498ba40fbccf0bc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:25.450 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:25.450 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:25.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.710 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:25.710 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:25.710 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b484065e-607a-4cbe-91cf-4c4c6779f5c1 -a 10.0.0.2 -s 4420 -i 4 00:14:25.971 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:25.971 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:25.971 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:25.971 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:25.971 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:25.971 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:28.540 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:28.540 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:28.540 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:28.540 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:28.540 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:28.540 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:28.540 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:28.540 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:28.540 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:28.540 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:28.540 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:28.540 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:28.540 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:28.540 [ 0]:0x1 00:14:28.540 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0b54320984f0415f9ee5f74af57c34bb 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0b54320984f0415f9ee5f74af57c34bb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:28.541 [ 1]:0x2 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d93db3825cfe42b89498ba40fbccf0bc 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d93db3825cfe42b89498ba40fbccf0bc != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:28.541 [ 0]:0x2 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d93db3825cfe42b89498ba40fbccf0bc 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d93db3825cfe42b89498ba40fbccf0bc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:28.541 19:04:45 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:28.541 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:28.844 [2024-11-26 19:04:45.795805] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:28.844 request: 00:14:28.844 { 00:14:28.844 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:28.844 "nsid": 2, 00:14:28.844 "host": "nqn.2016-06.io.spdk:host1", 00:14:28.844 "method": "nvmf_ns_remove_host", 00:14:28.844 "req_id": 1 00:14:28.844 } 00:14:28.844 Got JSON-RPC error response 00:14:28.844 response: 00:14:28.844 { 00:14:28.844 "code": -32602, 00:14:28.844 "message": "Invalid parameters" 00:14:28.844 } 00:14:28.844 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:28.844 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:28.844 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:28.844 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:28.844 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:28.844 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:28.844 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:28.844 19:04:45 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:28.844 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:28.844 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:28.844 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:28.844 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:28.844 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:28.844 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:28.844 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:28.844 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:28.844 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:28.844 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:28.844 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:28.844 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:28.844 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:28.844 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:28.844 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:28.844 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:28.844 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:28.844 [ 0]:0x2 00:14:28.844 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:28.844 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:28.844 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d93db3825cfe42b89498ba40fbccf0bc 00:14:28.844 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d93db3825cfe42b89498ba40fbccf0bc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:28.844 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:28.844 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:28.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.844 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2883236 00:14:28.844 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:28.844 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:28.844 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2883236 /var/tmp/host.sock 00:14:28.844 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2883236 ']' 00:14:28.844 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:28.844 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:28.844 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:28.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:28.844 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:28.844 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:29.144 [2024-11-26 19:04:46.077383] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:14:29.144 [2024-11-26 19:04:46.077438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2883236 ] 00:14:29.144 [2024-11-26 19:04:46.164767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.144 [2024-11-26 19:04:46.200212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.715 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:29.715 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:29.715 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.975 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:30.236 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid c6c46def-7021-47cb-ba36-1a0a9e8c1890 00:14:30.236 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:30.236 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g C6C46DEF702147CBBA361A0A9E8C1890 -i 00:14:30.236 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 435cbaeb-b55b-46ef-821f-6d7e3bd39a5e 00:14:30.236 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:30.236 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 435CBAEBB55B46EF821F6D7E3BD39A5E -i 00:14:30.496 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:30.756 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:31.017 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:31.017 19:04:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:31.278 nvme0n1 00:14:31.278 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:31.278 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:31.538 nvme1n2 00:14:31.538 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:31.538 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:31.538 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:31.538 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:31.538 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:31.538 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:31.800 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:31.800 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:31.800 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:31.800 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ c6c46def-7021-47cb-ba36-1a0a9e8c1890 == \c\6\c\4\6\d\e\f\-\7\0\2\1\-\4\7\c\b\-\b\a\3\6\-\1\a\0\a\9\e\8\c\1\8\9\0 ]] 00:14:31.800 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:31.800 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:31.800 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:32.061 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
435cbaeb-b55b-46ef-821f-6d7e3bd39a5e == \4\3\5\c\b\a\e\b\-\b\5\5\b\-\4\6\e\f\-\8\2\1\f\-\6\d\7\e\3\b\d\3\9\a\5\e ]] 00:14:32.061 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.320 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:32.320 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid c6c46def-7021-47cb-ba36-1a0a9e8c1890 00:14:32.320 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:32.320 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C6C46DEF702147CBBA361A0A9E8C1890 00:14:32.320 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:32.320 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C6C46DEF702147CBBA361A0A9E8C1890 00:14:32.320 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:32.320 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:32.320 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:32.320 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:32.320 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:32.320 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:32.320 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:32.320 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:32.320 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C6C46DEF702147CBBA361A0A9E8C1890 00:14:32.581 [2024-11-26 19:04:49.665954] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:32.581 [2024-11-26 19:04:49.665982] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:32.581 [2024-11-26 19:04:49.665989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.581 request: 00:14:32.581 { 00:14:32.581 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:32.581 "namespace": { 00:14:32.581 "bdev_name": 
"invalid", 00:14:32.581 "nsid": 1, 00:14:32.581 "nguid": "C6C46DEF702147CBBA361A0A9E8C1890", 00:14:32.581 "no_auto_visible": false, 00:14:32.581 "hide_metadata": false 00:14:32.581 }, 00:14:32.581 "method": "nvmf_subsystem_add_ns", 00:14:32.581 "req_id": 1 00:14:32.581 } 00:14:32.581 Got JSON-RPC error response 00:14:32.581 response: 00:14:32.581 { 00:14:32.581 "code": -32602, 00:14:32.581 "message": "Invalid parameters" 00:14:32.581 } 00:14:32.581 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:32.581 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:32.581 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:32.581 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:32.581 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid c6c46def-7021-47cb-ba36-1a0a9e8c1890 00:14:32.581 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:32.581 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g C6C46DEF702147CBBA361A0A9E8C1890 -i 00:14:32.843 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:34.754 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:34.754 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:34.754 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:35.015 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:35.015 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2883236 00:14:35.015 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2883236 ']' 00:14:35.015 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2883236 00:14:35.015 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:35.015 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:35.015 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2883236 00:14:35.015 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:35.015 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:35.015 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2883236' 00:14:35.015 killing process with pid 2883236 00:14:35.015 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2883236 00:14:35.015 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2883236 00:14:35.275 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:35.536 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:35.536 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:35.536 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:35.536 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:35.536 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:35.536 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:35.536 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:35.536 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:35.536 rmmod nvme_tcp 00:14:35.536 rmmod nvme_fabrics 00:14:35.536 rmmod nvme_keyring 00:14:35.536 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:35.536 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:35.536 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:35.536 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2880783 ']' 00:14:35.536 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2880783 00:14:35.536 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2880783 ']' 00:14:35.536 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2880783 00:14:35.536 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:35.536 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:35.536 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2880783 00:14:35.536 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:35.536 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:35.536 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2880783' 00:14:35.536 killing process with pid 2880783 00:14:35.536 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2880783 00:14:35.536 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2880783 00:14:35.797 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:35.797 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:35.797 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:35.797 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:35.797 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:14:35.797 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
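[annotation] The namespace-masking flow exercised by the trace above reduces to a short JSON-RPC sequence. A minimal sketch, assuming the same target address (10.0.0.2:4420), subsystem NQN, host NQN, and rpc.py path that appear in the trace:

    # attach a namespace that is hidden from every host by default
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    # grant, then revoke, visibility of NSID 1 for a single host NQN
    scripts/rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # on the initiator, a masked namespace reports an all-zero NGUID
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid

All four commands appear verbatim in the ns_masking.sh trace; the all-zero-NGUID check is what ns_is_visible@44-45 implements.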
00:14:35.797 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:14:35.797 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:35.797 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:35.797 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.797 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:35.797 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.706 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:37.706 00:14:37.706 real 0m28.119s 00:14:37.706 user 0m32.027s 00:14:37.706 sys 0m8.340s 00:14:37.706 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:37.706 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:37.706 ************************************ 00:14:37.706 END TEST nvmf_ns_masking 00:14:37.706 ************************************ 00:14:37.967 19:04:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:37.967 19:04:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:37.967 19:04:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:37.967 19:04:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:37.967 19:04:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:37.967 ************************************ 00:14:37.967 START TEST nvmf_nvme_cli 00:14:37.967 ************************************ 00:14:37.967 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:37.967 * Looking for test storage... 
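[annotation] The NOT wrapper seen throughout the masking test (common/autotest_common.sh@652-679 in the trace) inverts a command's exit status so that an expected failure counts as a pass. A simplified sketch of that pattern; the real helper also validates its argument via type -t and treats es > 128 as a crash rather than an ordinary failure:

    NOT() {
        local es=0
        "$@" || es=$?      # run the wrapped command, capture its status
        (( es != 0 ))      # succeed only if the wrapped command failed
    }
    NOT ns_is_visible 0x1  # passes only while NSID 1 is masked from this host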
00:14:37.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:37.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.967 --rc genhtml_branch_coverage=1 00:14:37.967 --rc genhtml_function_coverage=1 00:14:37.967 --rc genhtml_legend=1 00:14:37.967 --rc geninfo_all_blocks=1 00:14:37.967 --rc geninfo_unexecuted_blocks=1 00:14:37.967 00:14:37.967 ' 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:37.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.967 --rc genhtml_branch_coverage=1 00:14:37.967 --rc genhtml_function_coverage=1 00:14:37.967 --rc genhtml_legend=1 00:14:37.967 --rc geninfo_all_blocks=1 00:14:37.967 --rc geninfo_unexecuted_blocks=1 00:14:37.967 00:14:37.967 ' 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:37.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.967 --rc genhtml_branch_coverage=1 00:14:37.967 --rc genhtml_function_coverage=1 00:14:37.967 --rc genhtml_legend=1 00:14:37.967 --rc geninfo_all_blocks=1 00:14:37.967 --rc geninfo_unexecuted_blocks=1 00:14:37.967 00:14:37.967 ' 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:37.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.967 --rc genhtml_branch_coverage=1 00:14:37.967 --rc genhtml_function_coverage=1 00:14:37.967 --rc genhtml_legend=1 00:14:37.967 --rc geninfo_all_blocks=1 00:14:37.967 --rc geninfo_unexecuted_blocks=1 00:14:37.967 00:14:37.967 ' 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
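[annotation] The lt 1.15 2 gate above runs scripts/common.sh's cmp_versions, which splits each dotted version into an array and compares it field by field, padding the shorter version with zeros. A trimmed sketch of the comparison logic visible in the trace, hard-coding the two inputs for brevity:

    ver1=(1 15) ver2=(2)   # "1.15" vs "2", split on dots as in the trace
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { echo gt; break; }  # missing fields count as 0
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { echo lt; break; }
    done                   # prints "lt": 1.15 < 2, i.e. the installed lcov predates 2.x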
00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:37.967 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:38.227 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:38.227 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:38.227 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:38.227 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:38.227 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:38.227 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:38.227 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:38.227 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:38.227 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:38.227 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:38.227 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:38.227 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.227 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.227 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.227 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:38.228 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.228 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:38.228 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:38.228 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:38.228 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:38.228 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:38.228 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:38.228 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:38.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:38.228 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:38.228 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:38.228 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:38.228 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:38.228 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:38.228 19:04:55 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:38.228 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:38.228 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:38.228 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:38.228 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:38.228 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:38.228 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:38.228 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.228 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:38.228 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.228 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:38.228 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:38.228 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:38.228 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:46.366 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:46.366 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:46.366 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:46.366 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:46.366 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:46.366 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:46.366 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:46.366 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:46.366 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:46.366 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:46.366 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:46.366 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:46.366 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:46.366 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:46.366 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:46.366 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:46.366 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:46.366 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:46.366 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:46.366 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:46.366 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:46.366 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:46.366 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:46.366 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:46.366 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:46.366 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:46.366 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:46.366 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:46.366 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:46.367 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:46.367 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:46.367 
19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:46.367 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:46.367 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:14:46.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:46.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms
00:14:46.367
00:14:46.367 --- 10.0.0.2 ping statistics ---
00:14:46.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:46.367 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms
00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:46.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:46.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:14:46.367 00:14:46.367 --- 10.0.0.1 ping statistics --- 00:14:46.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.367 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2888701 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2888701 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2888701 ']' 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:46.367 19:05:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:46.367 [2024-11-26 19:05:02.832545] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
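The nvmftestinit sequence above is ordinary iproute2 plumbing: one port of the NIC pair is moved into a private network namespace and becomes the target side, the other stays on the host as the initiator. A minimal sketch of the same steps, assuming two ports already renamed cvl_0_0 and cvl_0_1 (interface names and the 10.0.0.0/24 addressing are specific to this rig):

  #!/usr/bin/env bash
  set -euo pipefail
  NS=cvl_0_0_ns_spdk   # namespace the harness creates for the target
  TGT=cvl_0_0          # port handed to the SPDK target
  INI=cvl_0_1          # port left on the host for the initiator
  ip -4 addr flush "$TGT"; ip -4 addr flush "$INI"
  ip netns add "$NS"
  ip link set "$TGT" netns "$NS"                           # target port disappears from the host
  ip addr add 10.0.0.1/24 dev "$INI"                       # initiator address
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"   # target address
  ip link set "$INI" up
  ip netns exec "$NS" ip link set "$TGT" up
  ip netns exec "$NS" ip link set lo up
  # open the NVMe/TCP port on the initiator interface, then prove both directions work
  iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1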
00:14:46.367 [2024-11-26 19:05:02.832609] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.367 [2024-11-26 19:05:02.935021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:46.367 [2024-11-26 19:05:02.990489] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:46.368 [2024-11-26 19:05:02.990550] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:46.368 [2024-11-26 19:05:02.990559] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:46.368 [2024-11-26 19:05:02.990566] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:46.368 [2024-11-26 19:05:02.990573] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:46.368 [2024-11-26 19:05:02.992664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.368 [2024-11-26 19:05:02.992823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:46.368 [2024-11-26 19:05:02.992986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.368 [2024-11-26 19:05:02.992987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:46.629 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:46.629 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:14:46.629 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:46.629 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:46.629 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:46.629 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:46.629 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:46.629 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.629 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:46.629 [2024-11-26 19:05:03.715299] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:46.629 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.629 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:46.629 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.629 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:46.629 Malloc0 00:14:46.629 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.629 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:46.629 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
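The nvmfappstart above plus the rpc_cmd calls that follow reduce to launching nvmf_tgt inside the target namespace and provisioning it over /var/tmp/spdk.sock. A hedged sketch using scripts/rpc.py directly (the harness wraps these in rpc_cmd/waitforlisten; SPDK_ROOT and the polling loop are assumptions, while the RPC names and arguments are the ones recorded in this log):

  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc="$SPDK_ROOT/scripts/rpc.py"
  ip netns exec cvl_0_0_ns_spdk "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  # wait for the RPC socket to answer before configuring anything
  until "$rpc" rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
  "$rpc" nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB in-capsule data
  "$rpc" bdev_malloc_create 64 512 -b Malloc0           # two 64 MiB RAM disks, 512 B blocks
  "$rpc" bdev_malloc_create 64 512 -b Malloc1
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

From the host side the test then needs only stock nvme-cli: nvme discover and nvme connect against 10.0.0.2:4420, followed by lsblk until two namespaces with serial SPDKISFASTANDAWESOME appear, exactly as the output below records.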
00:14:46.629 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:46.629 Malloc1 00:14:46.629 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.629 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:46.629 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.629 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:46.629 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.629 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:46.629 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.629 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:46.629 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.629 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:46.629 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.629 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:46.629 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.629 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:46.629 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.629 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:46.629 [2024-11-26 19:05:03.824548] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:46.629 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.629 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:46.629 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.629 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:46.890 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.890 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:14:46.890 00:14:46.890 Discovery Log Number of Records 2, Generation counter 2 00:14:46.890 =====Discovery Log Entry 0====== 00:14:46.890 trtype: tcp 00:14:46.890 adrfam: ipv4 00:14:46.890 subtype: current discovery subsystem 00:14:46.890 treq: not required 00:14:46.890 portid: 0 00:14:46.890 trsvcid: 4420 00:14:46.890 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:14:46.890 traddr: 10.0.0.2 00:14:46.890 eflags: explicit discovery connections, duplicate discovery information 00:14:46.890 sectype: none 00:14:46.890 =====Discovery Log Entry 1====== 00:14:46.890 trtype: tcp 00:14:46.890 adrfam: ipv4 00:14:46.890 subtype: nvme subsystem 00:14:46.890 treq: not required 00:14:46.890 portid: 0 00:14:46.890 trsvcid: 4420 00:14:46.890 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:46.890 traddr: 10.0.0.2 00:14:46.890 eflags: none 00:14:46.890 sectype: none 00:14:46.890 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:46.890 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:46.890 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:46.890 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:46.890 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:46.890 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:46.890 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:46.890 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:46.890 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:46.890 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:46.890 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:48.833 19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:48.833 19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:14:48.833 19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:48.833 19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:48.833 19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:48.833 19:05:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:50.746 19:05:07 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:50.746 /dev/nvme0n2 ]] 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:50.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.746 19:05:07 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:14:50.746 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:50.747 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:50.747 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:50.747 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:50.747 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:14:50.747 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:50.747 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:50.747 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.747 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:50.747 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.747 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:50.747 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:50.747 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:50.747 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:50.747 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:50.747 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:50.747 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:50.747 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:50.747 rmmod nvme_tcp 00:14:50.747 rmmod nvme_fabrics 00:14:50.747 rmmod nvme_keyring 00:14:50.747 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:50.747 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:50.747 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:50.747 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2888701 ']' 00:14:50.747 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2888701 00:14:50.747 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2888701 ']' 00:14:50.747 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2888701 00:14:50.747 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:14:50.747 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:50.747 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2888701 00:14:50.747 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:50.747 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:50.747 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2888701' 00:14:50.747 killing process with pid 2888701 00:14:50.747 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2888701 00:14:50.747 19:05:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2888701 00:14:51.008 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:51.008 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:51.008 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:51.008 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:51.008 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:14:51.008 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:51.008 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:14:51.008 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:51.008 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:51.008 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.008 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:51.008 19:05:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:53.563 00:14:53.563 real 0m15.190s 00:14:53.563 user 0m22.564s 00:14:53.563 sys 0m6.406s 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:53.563 ************************************ 00:14:53.563 END TEST nvmf_nvme_cli 00:14:53.563 ************************************ 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:53.563 ************************************ 00:14:53.563 START TEST nvmf_vfio_user 00:14:53.563 ************************************ 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:14:53.563 * Looking for test storage... 00:14:53.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:53.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.563 --rc genhtml_branch_coverage=1 00:14:53.563 --rc genhtml_function_coverage=1 00:14:53.563 --rc genhtml_legend=1 00:14:53.563 --rc geninfo_all_blocks=1 00:14:53.563 --rc geninfo_unexecuted_blocks=1 00:14:53.563 00:14:53.563 ' 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:53.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.563 --rc genhtml_branch_coverage=1 00:14:53.563 --rc genhtml_function_coverage=1 00:14:53.563 --rc genhtml_legend=1 00:14:53.563 --rc geninfo_all_blocks=1 00:14:53.563 --rc geninfo_unexecuted_blocks=1 00:14:53.563 00:14:53.563 ' 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:53.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.563 --rc genhtml_branch_coverage=1 00:14:53.563 --rc genhtml_function_coverage=1 00:14:53.563 --rc genhtml_legend=1 00:14:53.563 --rc geninfo_all_blocks=1 00:14:53.563 --rc geninfo_unexecuted_blocks=1 00:14:53.563 00:14:53.563 ' 00:14:53.563 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:53.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.563 --rc genhtml_branch_coverage=1 00:14:53.563 --rc genhtml_function_coverage=1 00:14:53.563 --rc genhtml_legend=1 00:14:53.563 --rc geninfo_all_blocks=1 00:14:53.563 --rc geninfo_unexecuted_blocks=1 00:14:53.563 00:14:53.563 ' 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:53.564 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
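One detail worth pulling out of the common.sh trace above: the host identity that the nvme discover/connect invocations in both tests pass via --hostnqn/--hostid is generated once per run. Roughly (the exact parameter expansion inside common.sh may differ; the values are the ones this run produced):

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # bare UUID: 00d0226a-fbea-ec11-9bc7-a4bf019282be
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")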
00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2890446 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2890446' 00:14:53.564 Process pid: 2890446 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2890446 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2890446 ']' 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:53.564 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:53.564 [2024-11-26 19:05:10.550477] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:14:53.564 [2024-11-26 19:05:10.550558] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.564 [2024-11-26 19:05:10.637245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:53.564 [2024-11-26 19:05:10.670299] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:53.564 [2024-11-26 19:05:10.670330] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:53.564 [2024-11-26 19:05:10.670336] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:53.564 [2024-11-26 19:05:10.670341] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:53.564 [2024-11-26 19:05:10.670345] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:53.564 [2024-11-26 19:05:10.671713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.564 [2024-11-26 19:05:10.671864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:53.564 [2024-11-26 19:05:10.672012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.564 [2024-11-26 19:05:10.672014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:54.135 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:54.135 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:54.135 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:55.535 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:55.535 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:55.535 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:55.535 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:55.535 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:55.535 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:55.535 Malloc1 00:14:55.796 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:55.796 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:56.057 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:56.318 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:56.318 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:56.318 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:56.318 Malloc2 00:14:56.318 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
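For the vfio-user flavor there is no networking at all: after nvmf_create_transport -t VFIOUSER, each emulated controller gets a directory under /var/run/vfio-user that serves as its endpoint, and the listener takes that path instead of an IP (-s 0, since there is no service port). The setup_nvmf_vfio_user steps above, condensed into a sketch with the same assumed $rpc alias as earlier (the second device's add_ns and add_listener calls complete just below):

  "$rpc" nvmf_create_transport -t VFIOUSER
  for i in 1 2; do
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i       # endpoint directory per controller
    "$rpc" bdev_malloc_create 64 512 -b Malloc$i
    "$rpc" nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    "$rpc" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    "$rpc" nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
        -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done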
00:14:56.579 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:56.840 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:56.840 19:05:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:56.840 19:05:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:56.840 19:05:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:56.840 19:05:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:56.840 19:05:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:56.840 19:05:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:56.840 [2024-11-26 19:05:14.040364] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:14:56.840 [2024-11-26 19:05:14.040387] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2891133 ] 00:14:57.103 [2024-11-26 19:05:14.073683] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:57.103 [2024-11-26 19:05:14.086502] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:57.103 [2024-11-26 19:05:14.086520] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f907231e000 00:14:57.103 [2024-11-26 19:05:14.087499] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:57.103 [2024-11-26 19:05:14.088501] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:57.103 [2024-11-26 19:05:14.089511] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:57.103 [2024-11-26 19:05:14.090514] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:57.103 [2024-11-26 19:05:14.091526] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:57.103 [2024-11-26 19:05:14.092534] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:57.104 [2024-11-26 19:05:14.093535] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:14:57.104 [2024-11-26 19:05:14.094539] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:57.104 [2024-11-26 19:05:14.095544] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:57.104 [2024-11-26 19:05:14.095550] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9072313000 00:14:57.104 [2024-11-26 19:05:14.096461] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:57.104 [2024-11-26 19:05:14.105902] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:57.104 [2024-11-26 19:05:14.105935] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:57.104 [2024-11-26 19:05:14.110633] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:57.104 [2024-11-26 19:05:14.110665] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:57.104 [2024-11-26 19:05:14.110729] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:57.104 [2024-11-26 19:05:14.110744] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:57.104 [2024-11-26 19:05:14.110749] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:57.104 [2024-11-26 19:05:14.111635] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:57.104 [2024-11-26 19:05:14.111644] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:57.104 [2024-11-26 19:05:14.111652] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:57.104 [2024-11-26 19:05:14.112637] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:57.104 [2024-11-26 19:05:14.112643] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:57.104 [2024-11-26 19:05:14.112648] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:57.104 [2024-11-26 19:05:14.113649] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:57.104 [2024-11-26 19:05:14.113655] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:57.104 [2024-11-26 19:05:14.114654] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
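The register polls in this init trace follow the standard NVMe controller-enable handshake, driven here over VFIO-user instead of PCIe. The BAR0 offsets being read and written map to these NVMe registers (offsets per the NVMe spec, matching the values in the trace):

    # 0x00  CAP   controller capabilities     (read as 0x201e0100ff above)
    # 0x08  VS    version                     (0x10300 = NVMe 1.3)
    # 0x14  CC    controller configuration    (CC.EN is set to enable the controller below)
    # 0x1c  CSTS  controller status           (CSTS.RDY is polled until it tracks CC.EN)
    # 0x24  AQA   admin queue attributes      (queue sizes; written 0xff00ff below)
    # 0x28  ASQ   admin submission queue base address
    # 0x30  ACQ   admin completion queue base address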
00:14:57.104 [2024-11-26 19:05:14.114660] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:57.104 [2024-11-26 19:05:14.114664] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:57.104 [2024-11-26 19:05:14.114668] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:57.104 [2024-11-26 19:05:14.114775] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:57.104 [2024-11-26 19:05:14.114778] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:57.104 [2024-11-26 19:05:14.114782] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:57.104 [2024-11-26 19:05:14.115660] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:57.104 [2024-11-26 19:05:14.116669] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:57.104 [2024-11-26 19:05:14.117673] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:57.104 [2024-11-26 19:05:14.118673] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:57.104 [2024-11-26 19:05:14.118726] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:57.104 [2024-11-26 19:05:14.119686] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:57.104 [2024-11-26 19:05:14.119692] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:57.104 [2024-11-26 19:05:14.119696] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:57.104 [2024-11-26 19:05:14.119711] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:57.104 [2024-11-26 19:05:14.119716] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:57.104 [2024-11-26 19:05:14.119731] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:57.104 [2024-11-26 19:05:14.119737] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:57.104 [2024-11-26 19:05:14.119740] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:57.104 [2024-11-26 19:05:14.119751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:14:57.104 [2024-11-26 19:05:14.119788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:57.104 [2024-11-26 19:05:14.119795] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:57.104 [2024-11-26 19:05:14.119799] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:57.104 [2024-11-26 19:05:14.119802] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:57.104 [2024-11-26 19:05:14.119806] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:57.104 [2024-11-26 19:05:14.119809] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:57.104 [2024-11-26 19:05:14.119812] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:57.104 [2024-11-26 19:05:14.119816] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:57.104 [2024-11-26 19:05:14.119821] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:57.104 [2024-11-26 19:05:14.119829] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:57.104 [2024-11-26 19:05:14.119838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:57.104 [2024-11-26 19:05:14.119847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.104 [2024-11-26 19:05:14.119853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.104 [2024-11-26 19:05:14.119859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.104 [2024-11-26 19:05:14.119865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.104 [2024-11-26 19:05:14.119868] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:57.104 [2024-11-26 19:05:14.119875] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:57.104 [2024-11-26 19:05:14.119881] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:57.104 [2024-11-26 19:05:14.119889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:57.104 [2024-11-26 19:05:14.119893] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:57.104 
[2024-11-26 19:05:14.119897] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:57.104 [2024-11-26 19:05:14.119903] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:57.104 [2024-11-26 19:05:14.119909] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:57.104 [2024-11-26 19:05:14.119916] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:57.104 [2024-11-26 19:05:14.119929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:57.104 [2024-11-26 19:05:14.119975] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:57.104 [2024-11-26 19:05:14.119981] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:57.104 [2024-11-26 19:05:14.119986] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:57.104 [2024-11-26 19:05:14.119989] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:57.104 [2024-11-26 19:05:14.119991] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:57.104 [2024-11-26 19:05:14.119996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:57.104 [2024-11-26 19:05:14.120012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:57.104 [2024-11-26 19:05:14.120021] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:57.104 [2024-11-26 19:05:14.120027] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:57.104 [2024-11-26 19:05:14.120033] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:57.105 [2024-11-26 19:05:14.120038] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:57.105 [2024-11-26 19:05:14.120041] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:57.105 [2024-11-26 19:05:14.120044] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:57.105 [2024-11-26 19:05:14.120048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:57.105 [2024-11-26 19:05:14.120062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:57.105 [2024-11-26 19:05:14.120070] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:14:57.105 [2024-11-26 19:05:14.120076] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:57.105 [2024-11-26 19:05:14.120081] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:57.105 [2024-11-26 19:05:14.120084] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:57.105 [2024-11-26 19:05:14.120086] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:57.105 [2024-11-26 19:05:14.120091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:57.105 [2024-11-26 19:05:14.120100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:57.105 [2024-11-26 19:05:14.120108] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:57.105 [2024-11-26 19:05:14.120113] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:57.105 [2024-11-26 19:05:14.120120] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:57.105 [2024-11-26 19:05:14.120125] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:57.105 [2024-11-26 19:05:14.120128] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:57.105 [2024-11-26 19:05:14.120132] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:57.105 [2024-11-26 19:05:14.120136] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:57.105 [2024-11-26 19:05:14.120139] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:57.105 [2024-11-26 19:05:14.120142] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:57.105 [2024-11-26 19:05:14.120156] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:57.105 [2024-11-26 19:05:14.120167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:57.105 [2024-11-26 19:05:14.120175] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:57.105 [2024-11-26 19:05:14.120182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:57.105 [2024-11-26 19:05:14.120190] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:57.105 [2024-11-26 19:05:14.120198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:57.105 [2024-11-26 19:05:14.120206] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:57.105 [2024-11-26 19:05:14.120213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:57.105 [2024-11-26 19:05:14.120224] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:57.105 [2024-11-26 19:05:14.120227] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:57.105 [2024-11-26 19:05:14.120230] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:57.105 [2024-11-26 19:05:14.120232] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:57.105 [2024-11-26 19:05:14.120234] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:57.105 [2024-11-26 19:05:14.120239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:57.105 [2024-11-26 19:05:14.120244] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:57.105 [2024-11-26 19:05:14.120247] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:57.105 [2024-11-26 19:05:14.120250] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:57.105 [2024-11-26 19:05:14.120254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:57.105 [2024-11-26 19:05:14.120259] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:57.105 [2024-11-26 19:05:14.120262] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:57.105 [2024-11-26 19:05:14.120266] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:57.105 [2024-11-26 19:05:14.120270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:57.105 [2024-11-26 19:05:14.120275] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:57.105 [2024-11-26 19:05:14.120278] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:57.105 [2024-11-26 19:05:14.120281] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:57.105 [2024-11-26 19:05:14.120285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:57.105 [2024-11-26 19:05:14.120290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:57.105 [2024-11-26 19:05:14.120299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:14:57.105 [2024-11-26 19:05:14.120306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:57.105 [2024-11-26 19:05:14.120311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:57.105 ===================================================== 00:14:57.105 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:57.105 ===================================================== 00:14:57.105 Controller Capabilities/Features 00:14:57.105 ================================ 00:14:57.105 Vendor ID: 4e58 00:14:57.105 Subsystem Vendor ID: 4e58 00:14:57.105 Serial Number: SPDK1 00:14:57.105 Model Number: SPDK bdev Controller 00:14:57.105 Firmware Version: 25.01 00:14:57.105 Recommended Arb Burst: 6 00:14:57.105 IEEE OUI Identifier: 8d 6b 50 00:14:57.105 Multi-path I/O 00:14:57.105 May have multiple subsystem ports: Yes 00:14:57.105 May have multiple controllers: Yes 00:14:57.105 Associated with SR-IOV VF: No 00:14:57.105 Max Data Transfer Size: 131072 00:14:57.105 Max Number of Namespaces: 32 00:14:57.105 Max Number of I/O Queues: 127 00:14:57.105 NVMe Specification Version (VS): 1.3 00:14:57.105 NVMe Specification Version (Identify): 1.3 00:14:57.105 Maximum Queue Entries: 256 00:14:57.105 Contiguous Queues Required: Yes 00:14:57.105 Arbitration Mechanisms Supported 00:14:57.105 Weighted Round Robin: Not Supported 00:14:57.105 Vendor Specific: Not Supported 00:14:57.105 Reset Timeout: 15000 ms 00:14:57.105 Doorbell Stride: 4 bytes 00:14:57.105 NVM Subsystem Reset: Not Supported 00:14:57.105 Command Sets Supported 00:14:57.105 NVM Command Set: Supported 00:14:57.105 Boot Partition: Not Supported 00:14:57.105 Memory Page Size Minimum: 4096 bytes 00:14:57.105 Memory Page Size Maximum: 4096 bytes 00:14:57.105 Persistent Memory Region: Not Supported 00:14:57.105 Optional Asynchronous Events Supported 00:14:57.105 Namespace Attribute Notices: Supported 00:14:57.105 Firmware Activation Notices: Not Supported 00:14:57.105 ANA Change Notices: Not Supported 00:14:57.105 PLE Aggregate Log Change Notices: Not Supported 00:14:57.105 LBA Status Info Alert Notices: Not Supported 00:14:57.105 EGE Aggregate Log Change Notices: Not Supported 00:14:57.105 Normal NVM Subsystem Shutdown event: Not Supported 00:14:57.105 Zone Descriptor Change Notices: Not Supported 00:14:57.105 Discovery Log Change Notices: Not Supported 00:14:57.105 Controller Attributes 00:14:57.105 128-bit Host Identifier: Supported 00:14:57.105 Non-Operational Permissive Mode: Not Supported 00:14:57.105 NVM Sets: Not Supported 00:14:57.105 Read Recovery Levels: Not Supported 00:14:57.105 Endurance Groups: Not Supported 00:14:57.105 Predictable Latency Mode: Not Supported 00:14:57.105 Traffic Based Keep ALive: Not Supported 00:14:57.105 Namespace Granularity: Not Supported 00:14:57.105 SQ Associations: Not Supported 00:14:57.105 UUID List: Not Supported 00:14:57.105 Multi-Domain Subsystem: Not Supported 00:14:57.105 Fixed Capacity Management: Not Supported 00:14:57.105 Variable Capacity Management: Not Supported 00:14:57.105 Delete Endurance Group: Not Supported 00:14:57.105 Delete NVM Set: Not Supported 00:14:57.105 Extended LBA Formats Supported: Not Supported 00:14:57.105 Flexible Data Placement Supported: Not Supported 00:14:57.105 00:14:57.105 Controller Memory Buffer Support 00:14:57.105 ================================ 00:14:57.105 
Supported: No 00:14:57.105 00:14:57.105 Persistent Memory Region Support 00:14:57.105 ================================ 00:14:57.106 Supported: No 00:14:57.106 00:14:57.106 Admin Command Set Attributes 00:14:57.106 ============================ 00:14:57.106 Security Send/Receive: Not Supported 00:14:57.106 Format NVM: Not Supported 00:14:57.106 Firmware Activate/Download: Not Supported 00:14:57.106 Namespace Management: Not Supported 00:14:57.106 Device Self-Test: Not Supported 00:14:57.106 Directives: Not Supported 00:14:57.106 NVMe-MI: Not Supported 00:14:57.106 Virtualization Management: Not Supported 00:14:57.106 Doorbell Buffer Config: Not Supported 00:14:57.106 Get LBA Status Capability: Not Supported 00:14:57.106 Command & Feature Lockdown Capability: Not Supported 00:14:57.106 Abort Command Limit: 4 00:14:57.106 Async Event Request Limit: 4 00:14:57.106 Number of Firmware Slots: N/A 00:14:57.106 Firmware Slot 1 Read-Only: N/A 00:14:57.106 Firmware Activation Without Reset: N/A 00:14:57.106 Multiple Update Detection Support: N/A 00:14:57.106 Firmware Update Granularity: No Information Provided 00:14:57.106 Per-Namespace SMART Log: No 00:14:57.106 Asymmetric Namespace Access Log Page: Not Supported 00:14:57.106 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:57.106 Command Effects Log Page: Supported 00:14:57.106 Get Log Page Extended Data: Supported 00:14:57.106 Telemetry Log Pages: Not Supported 00:14:57.106 Persistent Event Log Pages: Not Supported 00:14:57.106 Supported Log Pages Log Page: May Support 00:14:57.106 Commands Supported & Effects Log Page: Not Supported 00:14:57.106 Feature Identifiers & Effects Log Page:May Support 00:14:57.106 NVMe-MI Commands & Effects Log Page: May Support 00:14:57.106 Data Area 4 for Telemetry Log: Not Supported 00:14:57.106 Error Log Page Entries Supported: 128 00:14:57.106 Keep Alive: Supported 00:14:57.106 Keep Alive Granularity: 10000 ms 00:14:57.106 00:14:57.106 NVM Command Set Attributes 00:14:57.106 ========================== 00:14:57.106 Submission Queue Entry Size 00:14:57.106 Max: 64 00:14:57.106 Min: 64 00:14:57.106 Completion Queue Entry Size 00:14:57.106 Max: 16 00:14:57.106 Min: 16 00:14:57.106 Number of Namespaces: 32 00:14:57.106 Compare Command: Supported 00:14:57.106 Write Uncorrectable Command: Not Supported 00:14:57.106 Dataset Management Command: Supported 00:14:57.106 Write Zeroes Command: Supported 00:14:57.106 Set Features Save Field: Not Supported 00:14:57.106 Reservations: Not Supported 00:14:57.106 Timestamp: Not Supported 00:14:57.106 Copy: Supported 00:14:57.106 Volatile Write Cache: Present 00:14:57.106 Atomic Write Unit (Normal): 1 00:14:57.106 Atomic Write Unit (PFail): 1 00:14:57.106 Atomic Compare & Write Unit: 1 00:14:57.106 Fused Compare & Write: Supported 00:14:57.106 Scatter-Gather List 00:14:57.106 SGL Command Set: Supported (Dword aligned) 00:14:57.106 SGL Keyed: Not Supported 00:14:57.106 SGL Bit Bucket Descriptor: Not Supported 00:14:57.106 SGL Metadata Pointer: Not Supported 00:14:57.106 Oversized SGL: Not Supported 00:14:57.106 SGL Metadata Address: Not Supported 00:14:57.106 SGL Offset: Not Supported 00:14:57.106 Transport SGL Data Block: Not Supported 00:14:57.106 Replay Protected Memory Block: Not Supported 00:14:57.106 00:14:57.106 Firmware Slot Information 00:14:57.106 ========================= 00:14:57.106 Active slot: 1 00:14:57.106 Slot 1 Firmware Revision: 25.01 00:14:57.106 00:14:57.106 00:14:57.106 Commands Supported and Effects 00:14:57.106 ============================== 00:14:57.106 Admin 
Commands 00:14:57.106 -------------- 00:14:57.106 Get Log Page (02h): Supported 00:14:57.106 Identify (06h): Supported 00:14:57.106 Abort (08h): Supported 00:14:57.106 Set Features (09h): Supported 00:14:57.106 Get Features (0Ah): Supported 00:14:57.106 Asynchronous Event Request (0Ch): Supported 00:14:57.106 Keep Alive (18h): Supported 00:14:57.106 I/O Commands 00:14:57.106 ------------ 00:14:57.106 Flush (00h): Supported LBA-Change 00:14:57.106 Write (01h): Supported LBA-Change 00:14:57.106 Read (02h): Supported 00:14:57.106 Compare (05h): Supported 00:14:57.106 Write Zeroes (08h): Supported LBA-Change 00:14:57.106 Dataset Management (09h): Supported LBA-Change 00:14:57.106 Copy (19h): Supported LBA-Change 00:14:57.106 00:14:57.106 Error Log 00:14:57.106 ========= 00:14:57.106 00:14:57.106 Arbitration 00:14:57.106 =========== 00:14:57.106 Arbitration Burst: 1 00:14:57.106 00:14:57.106 Power Management 00:14:57.106 ================ 00:14:57.106 Number of Power States: 1 00:14:57.106 Current Power State: Power State #0 00:14:57.106 Power State #0: 00:14:57.106 Max Power: 0.00 W 00:14:57.106 Non-Operational State: Operational 00:14:57.106 Entry Latency: Not Reported 00:14:57.106 Exit Latency: Not Reported 00:14:57.106 Relative Read Throughput: 0 00:14:57.106 Relative Read Latency: 0 00:14:57.106 Relative Write Throughput: 0 00:14:57.106 Relative Write Latency: 0 00:14:57.106 Idle Power: Not Reported 00:14:57.106 Active Power: Not Reported 00:14:57.106 Non-Operational Permissive Mode: Not Supported 00:14:57.106 00:14:57.106 Health Information 00:14:57.106 ================== 00:14:57.106 Critical Warnings: 00:14:57.106 Available Spare Space: OK 00:14:57.106 Temperature: OK 00:14:57.106 Device Reliability: OK 00:14:57.106 Read Only: No 00:14:57.106 Volatile Memory Backup: OK 00:14:57.106 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:57.106 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:57.106 Available Spare: 0% 00:14:57.106 Available Spare Threshold: 0% 00:14:57.106 Life Percentage Used: 0% 00:14:57.106 Data Units Read: 0 00:14:57.106 Data Units Written: 0 00:14:57.106 Host Read Commands: 0 00:14:57.106 Host Write Commands: 0 00:14:57.106 Controller Busy Time: 0 minutes 00:14:57.106 Power Cycles: 0 00:14:57.106 Power On Hours: 0 hours 00:14:57.106 Unsafe Shutdowns: 0 00:14:57.106 Unrecoverable Media Errors: 0 00:14:57.106 Lifetime Error Log Entries: 0 00:14:57.106 Warning Temperature Time: 0 minutes 00:14:57.106 Critical Temperature Time: 0 minutes 00:14:57.106 00:14:57.106 Number of Queues 00:14:57.106 ================ 00:14:57.106 Number of I/O Submission Queues: 127 00:14:57.106 Number of I/O Completion Queues: 127 00:14:57.106 00:14:57.106 Active Namespaces 00:14:57.106 ================= 00:14:57.106 Namespace ID:1 00:14:57.106 Error Recovery Timeout: Unlimited 00:14:57.106 Command Set Identifier: NVM (00h) 00:14:57.106 Deallocate: Supported 00:14:57.106 Deallocated/Unwritten Error: Not Supported 00:14:57.106 Deallocated Read Value: Unknown 00:14:57.106 Deallocate in Write Zeroes: Not Supported 00:14:57.106 Deallocated Guard Field: 0xFFFF 00:14:57.106 Flush: Supported 00:14:57.106 Reservation: Supported 00:14:57.106 Namespace Sharing Capabilities: Multiple Controllers 00:14:57.106 Size (in LBAs): 131072 (0GiB) 00:14:57.106 Capacity (in LBAs): 131072 (0GiB) 00:14:57.106 Utilization (in LBAs): 131072 (0GiB) 00:14:57.106 NGUID: 9226604F4FE54A1DACD63ACED93BF876 00:14:57.107 UUID: 9226604f-4fe5-4a1d-acd6-3aced93bf876 00:14:57.107 Thin Provisioning: Not Supported 00:14:57.107 Per-NS Atomic Units: Yes 00:14:57.107 Atomic Boundary Size (Normal): 0 00:14:57.107 Atomic Boundary Size (PFail): 0 00:14:57.107 Atomic Boundary Offset: 0 00:14:57.107 Maximum Single Source Range Length: 65535 00:14:57.107 Maximum Copy Length: 65535 00:14:57.107 Maximum Source Range Count: 1 00:14:57.107 NGUID/EUI64 Never Reused: No 00:14:57.107 Namespace Write Protected: No 00:14:57.107 Number of LBA Formats: 1 00:14:57.107 Current LBA Format: LBA Format #00 00:14:57.107 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:57.107 00:14:57.107
[2024-11-26 19:05:14.120384] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:57.106 [2024-11-26 19:05:14.120390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:57.106 [2024-11-26 19:05:14.120460] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:14:57.106 [2024-11-26 19:05:14.120467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.106 [2024-11-26 19:05:14.120471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.106 [2024-11-26 19:05:14.120476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.106 [2024-11-26 19:05:14.120480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.106 [2024-11-26 19:05:14.120696] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:57.106 [2024-11-26 19:05:14.120704] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:57.106 [2024-11-26 19:05:14.121698] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:57.106 [2024-11-26 19:05:14.121739] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:14:57.106 [2024-11-26 19:05:14.121743] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:14:57.106 [2024-11-26 19:05:14.122713] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:57.106 [2024-11-26 19:05:14.122721] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:14:57.106 [2024-11-26 19:05:14.122775] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:57.106 [2024-11-26 19:05:14.125164] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:57.107
19:05:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
00:14:57.368 [2024-11-26 19:05:14.312855] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:02.657 Initializing NVMe Controllers 00:15:02.657 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:02.657 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:02.657 Initialization complete. Launching workers. 00:15:02.657 ======================================================== 00:15:02.657 Latency(us) 00:15:02.657 Device Information : IOPS MiB/s Average min max 00:15:02.657 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40007.60 156.28 3199.26 851.01 6906.58 00:15:02.657 ======================================================== 00:15:02.657 Total : 40007.60 156.28 3199.26 851.01 6906.58 00:15:02.657 00:15:02.657 [2024-11-26 19:05:19.332429] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:02.657 19:05:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:02.657 [2024-11-26 19:05:19.528276] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:07.941 Initializing NVMe Controllers 00:15:07.941 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:07.941 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:07.941 Initialization complete. Launching workers. 
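The MiB/s column in these perf tables follows directly from the IOPS column and the 4096-byte I/O size passed as -o 4096; a quick check of the read run above (a sketch, using awk for the rounding):

    awk 'BEGIN { printf "%.2f MiB/s\n", 40007.60 * 4096 / 1048576 }'   # prints 156.28 MiB/s, matching the table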
00:15:07.941 ======================================================== 00:15:07.941 Latency(us) 00:15:07.941 Device Information : IOPS MiB/s Average min max 00:15:07.941 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16059.11 62.73 7976.09 6215.67 8749.01 00:15:07.941 ======================================================== 00:15:07.941 Total : 16059.11 62.73 7976.09 6215.67 8749.01 00:15:07.941 00:15:07.941 [2024-11-26 19:05:24.569407] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:07.941 19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:07.941 [2024-11-26 19:05:24.773252] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:13.238 [2024-11-26 19:05:29.869480] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:13.238 Initializing NVMe Controllers 00:15:13.238 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:13.238 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:13.238 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:13.238 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:13.238 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:13.238 Initialization complete. Launching workers. 00:15:13.238 Starting thread on core 2 00:15:13.238 Starting thread on core 3 00:15:13.238 Starting thread on core 1 00:15:13.238 19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:13.238 [2024-11-26 19:05:30.113524] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:16.537 [2024-11-26 19:05:33.173874] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:16.537 Initializing NVMe Controllers 00:15:16.537 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:16.537 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:16.537 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:16.537 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:16.537 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:16.537 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:16.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:16.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:16.537 Initialization complete. Launching workers. 
00:15:16.537 Starting thread on core 1 with urgent priority queue 00:15:16.537 Starting thread on core 2 with urgent priority queue 00:15:16.537 Starting thread on core 3 with urgent priority queue 00:15:16.537 Starting thread on core 0 with urgent priority queue 00:15:16.537 SPDK bdev Controller (SPDK1 ) core 0: 11647.00 IO/s 8.59 secs/100000 ios 00:15:16.537 SPDK bdev Controller (SPDK1 ) core 1: 10252.00 IO/s 9.75 secs/100000 ios 00:15:16.537 SPDK bdev Controller (SPDK1 ) core 2: 11295.00 IO/s 8.85 secs/100000 ios 00:15:16.537 SPDK bdev Controller (SPDK1 ) core 3: 10005.67 IO/s 9.99 secs/100000 ios 00:15:16.537 ======================================================== 00:15:16.537 00:15:16.537 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:16.537 [2024-11-26 19:05:33.412570] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:16.537 Initializing NVMe Controllers 00:15:16.537 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:16.537 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:16.537 Namespace ID: 1 size: 0GB 00:15:16.537 Initialization complete. 00:15:16.537 INFO: using host memory buffer for IO 00:15:16.537 Hello world! 00:15:16.537 [2024-11-26 19:05:33.446789] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:16.537 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:16.537 [2024-11-26 19:05:33.681411] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:17.925 Initializing NVMe Controllers 00:15:17.925 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:17.925 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:17.925 Initialization complete. Launching workers. 
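In the arbitration summary above, the secs/100000 ios column is simply 100000 divided by the per-core IO/s; checking core 0's row the same way:

    awk 'BEGIN { printf "%.2f secs/100000 ios\n", 100000 / 11647.00 }'   # prints 8.59, matching core 0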
00:15:17.925 submit (in ns) avg, min, max = 6324.8, 2831.7, 3998454.2 00:15:17.925 complete (in ns) avg, min, max = 17449.1, 1625.8, 4044422.5 00:15:17.925 00:15:17.925 Submit histogram 00:15:17.925 ================ 00:15:17.925 Range in us Cumulative Count 00:15:17.925 2.827 - 2.840: 0.0738% ( 15) 00:15:17.925 2.840 - 2.853: 0.9157% ( 171) 00:15:17.925 2.853 - 2.867: 3.1655% ( 457) 00:15:17.925 2.867 - 2.880: 7.2072% ( 821) 00:15:17.925 2.880 - 2.893: 12.2335% ( 1021) 00:15:17.925 2.893 - 2.907: 18.0722% ( 1186) 00:15:17.925 2.907 - 2.920: 24.0093% ( 1206) 00:15:17.925 2.920 - 2.933: 29.8183% ( 1180) 00:15:17.925 2.933 - 2.947: 35.9425% ( 1244) 00:15:17.925 2.947 - 2.960: 41.1214% ( 1052) 00:15:17.925 2.960 - 2.973: 46.6352% ( 1120) 00:15:17.925 2.973 - 2.987: 52.2769% ( 1146) 00:15:17.925 2.987 - 3.000: 60.7394% ( 1719) 00:15:17.925 3.000 - 3.013: 69.6795% ( 1816) 00:15:17.925 3.013 - 3.027: 77.7285% ( 1635) 00:15:17.925 3.027 - 3.040: 84.6059% ( 1397) 00:15:17.925 3.040 - 3.053: 90.9418% ( 1287) 00:15:17.925 3.053 - 3.067: 94.7571% ( 775) 00:15:17.925 3.067 - 3.080: 97.1398% ( 484) 00:15:17.925 3.080 - 3.093: 98.3607% ( 248) 00:15:17.925 3.093 - 3.107: 99.0302% ( 136) 00:15:17.925 3.107 - 3.120: 99.3797% ( 71) 00:15:17.925 3.120 - 3.133: 99.4782% ( 20) 00:15:17.925 3.133 - 3.147: 99.5274% ( 10) 00:15:17.925 3.147 - 3.160: 99.5668% ( 8) 00:15:17.925 3.200 - 3.213: 99.5717% ( 1) 00:15:17.925 3.227 - 3.240: 99.5766% ( 1) 00:15:17.925 3.240 - 3.253: 99.5815% ( 1) 00:15:17.925 3.387 - 3.400: 99.5865% ( 1) 00:15:17.925 3.413 - 3.440: 99.5914% ( 1) 00:15:17.925 3.440 - 3.467: 99.6012% ( 2) 00:15:17.925 3.493 - 3.520: 99.6062% ( 1) 00:15:17.925 3.653 - 3.680: 99.6111% ( 1) 00:15:17.925 3.813 - 3.840: 99.6160% ( 1) 00:15:17.925 4.000 - 4.027: 99.6209% ( 1) 00:15:17.925 4.187 - 4.213: 99.6259% ( 1) 00:15:17.925 4.293 - 4.320: 99.6308% ( 1) 00:15:17.925 4.347 - 4.373: 99.6357% ( 1) 00:15:17.925 4.560 - 4.587: 99.6406% ( 1) 00:15:17.925 4.587 - 4.613: 99.6455% ( 1) 00:15:17.925 4.667 - 4.693: 99.6505% ( 1) 00:15:17.925 4.747 - 4.773: 99.6554% ( 1) 00:15:17.925 4.827 - 4.853: 99.6603% ( 1) 00:15:17.926 4.853 - 4.880: 99.6652% ( 1) 00:15:17.926 4.933 - 4.960: 99.6702% ( 1) 00:15:17.926 4.960 - 4.987: 99.6751% ( 1) 00:15:17.926 5.040 - 5.067: 99.6800% ( 1) 00:15:17.926 5.067 - 5.093: 99.6849% ( 1) 00:15:17.926 5.120 - 5.147: 99.6997% ( 3) 00:15:17.926 5.173 - 5.200: 99.7095% ( 2) 00:15:17.926 5.227 - 5.253: 99.7145% ( 1) 00:15:17.926 5.413 - 5.440: 99.7194% ( 1) 00:15:17.926 5.440 - 5.467: 99.7243% ( 1) 00:15:17.926 5.467 - 5.493: 99.7292% ( 1) 00:15:17.926 5.520 - 5.547: 99.7342% ( 1) 00:15:17.926 5.600 - 5.627: 99.7391% ( 1) 00:15:17.926 5.627 - 5.653: 99.7489% ( 2) 00:15:17.926 5.653 - 5.680: 99.7588% ( 2) 00:15:17.926 5.680 - 5.707: 99.7735% ( 3) 00:15:17.926 5.707 - 5.733: 99.7785% ( 1) 00:15:17.926 5.733 - 5.760: 99.7834% ( 1) 00:15:17.926 5.787 - 5.813: 99.7883% ( 1) 00:15:17.926 5.813 - 5.840: 99.7932% ( 1) 00:15:17.926 5.840 - 5.867: 99.7982% ( 1) 00:15:17.926 5.893 - 5.920: 99.8080% ( 2) 00:15:17.926 5.920 - 5.947: 99.8129% ( 1) 00:15:17.926 5.947 - 5.973: 99.8228% ( 2) 00:15:17.926 5.973 - 6.000: 99.8277% ( 1) 00:15:17.926 6.080 - 6.107: 99.8326% ( 1) 00:15:17.926 6.187 - 6.213: 99.8375% ( 1) 00:15:17.926 6.213 - 6.240: 99.8425% ( 1) 00:15:17.926 6.267 - 6.293: 99.8474% ( 1) 00:15:17.926 6.293 - 6.320: 99.8572% ( 2) 00:15:17.926 6.320 - 6.347: 99.8622% ( 1) 00:15:17.926 6.347 - 6.373: 99.8720% ( 2) 00:15:17.926 6.400 - 6.427: 99.8769% ( 1) 00:15:17.926 6.480 - 6.507: 99.8818% ( 1) 
00:15:17.926 6.560 - 6.587: 99.8868% ( 1) 00:15:17.926 6.640 - 6.667: 99.8917% ( 1) 00:15:17.926 [2024-11-26 19:05:34.702981] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:17.926 6.667 - 6.693: 99.8966% ( 1) 00:15:17.926 6.827 - 6.880: 99.9015% ( 1) 00:15:17.926 6.880 - 6.933: 99.9065% ( 1) 00:15:17.926 6.933 - 6.987: 99.9114% ( 1) 00:15:17.926 11.093 - 11.147: 99.9163% ( 1) 00:15:17.926 3986.773 - 4014.080: 100.0000% ( 17) 00:15:17.926 00:15:17.926 Complete histogram 00:15:17.926 ================== 00:15:17.926 Range in us Cumulative Count 00:15:17.926 1.620 - 1.627: 0.0049% ( 1) 00:15:17.926 1.633 - 1.640: 0.1969% ( 39) 00:15:17.926 1.640 - 1.647: 0.9846% ( 160) 00:15:17.926 1.647 - 1.653: 1.0732% ( 18) 00:15:17.926 1.653 - 1.660: 1.1618% ( 18) 00:15:17.926 1.660 - 1.667: 1.2947% ( 27) 00:15:17.926 1.667 - 1.673: 1.3194% ( 5) 00:15:17.926 1.673 - 1.680: 1.3292% ( 2) 00:15:17.926 1.680 - 1.687: 2.7076% ( 280) 00:15:17.926 1.687 - 1.693: 46.0247% ( 8799) 00:15:17.926 1.693 - 1.700: 58.6866% ( 2572) 00:15:17.926 1.700 - 1.707: 64.9535% ( 1273) 00:15:17.926 1.707 - 1.720: 80.6971% ( 3198) 00:15:17.926 1.720 - 1.733: 83.4441% ( 558) 00:15:17.926 1.733 - 1.747: 84.2121% ( 156) 00:15:17.926 1.747 - 1.760: 89.6372% ( 1102) 00:15:17.926 1.760 - 1.773: 94.8161% ( 1052) 00:15:17.926 1.773 - 1.787: 97.8241% ( 611) 00:15:17.926 1.787 - 1.800: 99.0991% ( 259) 00:15:17.926 1.800 - 1.813: 99.3551% ( 52) 00:15:17.926 1.813 - 1.827: 99.4289% ( 15) 00:15:17.926 1.827 - 1.840: 99.4437% ( 3) 00:15:17.926 1.907 - 1.920: 99.4486% ( 1) 00:15:17.926 3.440 - 3.467: 99.4536% ( 1) 00:15:17.926 3.787 - 3.813: 99.4585% ( 1) 00:15:17.926 3.893 - 3.920: 99.4634% ( 1) 00:15:17.926 3.973 - 4.000: 99.4683% ( 1) 00:15:17.926 4.000 - 4.027: 99.4732% ( 1) 00:15:17.926 4.080 - 4.107: 99.4782% ( 1) 00:15:17.926 4.107 - 4.133: 99.4831% ( 1) 00:15:17.926 4.133 - 4.160: 99.4880% ( 1) 00:15:17.926 4.240 - 4.267: 99.4929% ( 1) 00:15:17.926 4.293 - 4.320: 99.4979% ( 1) 00:15:17.926 4.320 - 4.347: 99.5028% ( 1) 00:15:17.926 4.347 - 4.373: 99.5126% ( 2) 00:15:17.926 4.373 - 4.400: 99.5176% ( 1) 00:15:17.926 4.693 - 4.720: 99.5225% ( 1) 00:15:17.926 4.720 - 4.747: 99.5274% ( 1) 00:15:17.926 4.747 - 4.773: 99.5323% ( 1) 00:15:17.926 4.853 - 4.880: 99.5422% ( 2) 00:15:17.926 4.933 - 4.960: 99.5471% ( 1) 00:15:17.926 4.960 - 4.987: 99.5520% ( 1) 00:15:17.926 5.013 - 5.040: 99.5569% ( 1) 00:15:17.926 5.040 - 5.067: 99.5668% ( 2) 00:15:17.926 5.387 - 5.413: 99.5717% ( 1) 00:15:17.926 5.413 - 5.440: 99.5766% ( 1) 00:15:17.926 6.453 - 6.480: 99.5815% ( 1) 00:15:17.926 8.853 - 8.907: 99.5865% ( 1) 00:15:17.926 9.600 - 9.653: 99.5914% ( 1) 00:15:17.926 10.293 - 10.347: 99.5963% ( 1) 00:15:17.926 10.720 - 10.773: 99.6012% ( 1) 00:15:17.926 11.680 - 11.733: 99.6062% ( 1) 00:15:17.926 3986.773 - 4014.080: 99.9902% ( 78) 00:15:17.926 4014.080 - 4041.387: 99.9951% ( 1) 00:15:17.926 4041.387 - 4068.693: 100.0000% ( 1) 00:15:17.926 00:15:17.926 19:05:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:17.926 19:05:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:17.926 19:05:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:17.926 19:05:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 
-- # local malloc_num=Malloc3 00:15:17.926 19:05:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:17.926 [ 00:15:17.926 { 00:15:17.926 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:17.926 "subtype": "Discovery", 00:15:17.926 "listen_addresses": [], 00:15:17.926 "allow_any_host": true, 00:15:17.926 "hosts": [] 00:15:17.926 }, 00:15:17.926 { 00:15:17.926 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:17.926 "subtype": "NVMe", 00:15:17.926 "listen_addresses": [ 00:15:17.926 { 00:15:17.926 "trtype": "VFIOUSER", 00:15:17.926 "adrfam": "IPv4", 00:15:17.926 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:17.926 "trsvcid": "0" 00:15:17.926 } 00:15:17.926 ], 00:15:17.926 "allow_any_host": true, 00:15:17.926 "hosts": [], 00:15:17.926 "serial_number": "SPDK1", 00:15:17.926 "model_number": "SPDK bdev Controller", 00:15:17.926 "max_namespaces": 32, 00:15:17.926 "min_cntlid": 1, 00:15:17.926 "max_cntlid": 65519, 00:15:17.926 "namespaces": [ 00:15:17.926 { 00:15:17.926 "nsid": 1, 00:15:17.926 "bdev_name": "Malloc1", 00:15:17.926 "name": "Malloc1", 00:15:17.926 "nguid": "9226604F4FE54A1DACD63ACED93BF876", 00:15:17.926 "uuid": "9226604f-4fe5-4a1d-acd6-3aced93bf876" 00:15:17.926 } 00:15:17.926 ] 00:15:17.926 }, 00:15:17.926 { 00:15:17.926 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:17.926 "subtype": "NVMe", 00:15:17.926 "listen_addresses": [ 00:15:17.926 { 00:15:17.926 "trtype": "VFIOUSER", 00:15:17.926 "adrfam": "IPv4", 00:15:17.926 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:17.926 "trsvcid": "0" 00:15:17.926 } 00:15:17.926 ], 00:15:17.926 "allow_any_host": true, 00:15:17.926 "hosts": [], 00:15:17.926 "serial_number": "SPDK2", 00:15:17.926 "model_number": "SPDK bdev Controller", 00:15:17.926 "max_namespaces": 32, 00:15:17.926 "min_cntlid": 1, 00:15:17.926 "max_cntlid": 65519, 00:15:17.926 "namespaces": [ 00:15:17.926 { 00:15:17.926 "nsid": 1, 00:15:17.926 "bdev_name": "Malloc2", 00:15:17.926 "name": "Malloc2", 00:15:17.926 "nguid": "C8AC019F480B413CB0FDF81104D5D643", 00:15:17.926 "uuid": "c8ac019f-480b-413c-b0fd-f81104d5d643" 00:15:17.926 } 00:15:17.926 ] 00:15:17.926 } 00:15:17.926 ] 00:15:17.926 19:05:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:17.926 19:05:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:17.926 19:05:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2895159 00:15:17.926 19:05:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:17.926 19:05:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:17.926 19:05:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:17.926 19:05:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:17.926 19:05:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:17.926 19:05:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:17.926 19:05:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:17.926 [2024-11-26 19:05:35.068849] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:17.926 Malloc3 00:15:17.926 19:05:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:18.224 [2024-11-26 19:05:35.286406] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:18.224 19:05:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:18.224 Asynchronous Event Request test 00:15:18.224 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:18.224 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:18.224 Registering asynchronous event callbacks... 00:15:18.224 Starting namespace attribute notice tests for all controllers... 00:15:18.224 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:18.224 aer_cb - Changed Namespace 00:15:18.224 Cleaning up... 00:15:18.553 [ 00:15:18.553 { 00:15:18.553 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:18.553 "subtype": "Discovery", 00:15:18.553 "listen_addresses": [], 00:15:18.553 "allow_any_host": true, 00:15:18.553 "hosts": [] 00:15:18.553 }, 00:15:18.553 { 00:15:18.553 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:18.553 "subtype": "NVMe", 00:15:18.553 "listen_addresses": [ 00:15:18.553 { 00:15:18.553 "trtype": "VFIOUSER", 00:15:18.553 "adrfam": "IPv4", 00:15:18.553 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:18.553 "trsvcid": "0" 00:15:18.553 } 00:15:18.553 ], 00:15:18.553 "allow_any_host": true, 00:15:18.553 "hosts": [], 00:15:18.553 "serial_number": "SPDK1", 00:15:18.553 "model_number": "SPDK bdev Controller", 00:15:18.553 "max_namespaces": 32, 00:15:18.553 "min_cntlid": 1, 00:15:18.553 "max_cntlid": 65519, 00:15:18.553 "namespaces": [ 00:15:18.553 { 00:15:18.553 "nsid": 1, 00:15:18.553 "bdev_name": "Malloc1", 00:15:18.553 "name": "Malloc1", 00:15:18.553 "nguid": "9226604F4FE54A1DACD63ACED93BF876", 00:15:18.553 "uuid": "9226604f-4fe5-4a1d-acd6-3aced93bf876" 00:15:18.553 }, 00:15:18.553 { 00:15:18.553 "nsid": 2, 00:15:18.553 "bdev_name": "Malloc3", 00:15:18.553 "name": "Malloc3", 00:15:18.553 "nguid": "B3F7803F5CC5467F9AE09657CFFC2DF0", 00:15:18.553 "uuid": "b3f7803f-5cc5-467f-9ae0-9657cffc2df0" 00:15:18.553 } 00:15:18.553 ] 00:15:18.553 }, 00:15:18.553 { 00:15:18.553 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:18.553 "subtype": "NVMe", 00:15:18.553 "listen_addresses": [ 00:15:18.553 { 00:15:18.553 "trtype": "VFIOUSER", 00:15:18.553 "adrfam": "IPv4", 00:15:18.553 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:18.553 "trsvcid": "0" 00:15:18.553 } 00:15:18.553 ], 00:15:18.553 "allow_any_host": true, 00:15:18.553 "hosts": [], 00:15:18.553 "serial_number": "SPDK2", 00:15:18.553 "model_number": "SPDK bdev 
Controller", 00:15:18.553 "max_namespaces": 32, 00:15:18.553 "min_cntlid": 1, 00:15:18.553 "max_cntlid": 65519, 00:15:18.553 "namespaces": [ 00:15:18.553 { 00:15:18.553 "nsid": 1, 00:15:18.553 "bdev_name": "Malloc2", 00:15:18.553 "name": "Malloc2", 00:15:18.553 "nguid": "C8AC019F480B413CB0FDF81104D5D643", 00:15:18.553 "uuid": "c8ac019f-480b-413c-b0fd-f81104d5d643" 00:15:18.553 } 00:15:18.553 ] 00:15:18.553 } 00:15:18.553 ] 00:15:18.553 19:05:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2895159 00:15:18.553 19:05:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:18.553 19:05:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:18.553 19:05:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:18.553 19:05:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:18.553 [2024-11-26 19:05:35.516029] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:15:18.553 [2024-11-26 19:05:35.516075] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2895180 ] 00:15:18.553 [2024-11-26 19:05:35.556358] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:18.553 [2024-11-26 19:05:35.563341] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:18.553 [2024-11-26 19:05:35.563358] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f1563301000 00:15:18.553 [2024-11-26 19:05:35.564347] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:18.553 [2024-11-26 19:05:35.565351] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:18.553 [2024-11-26 19:05:35.566357] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:18.553 [2024-11-26 19:05:35.567362] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:18.553 [2024-11-26 19:05:35.568368] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:18.553 [2024-11-26 19:05:35.569376] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:18.554 [2024-11-26 19:05:35.570383] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:18.554 [2024-11-26 19:05:35.571393] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:15:18.554 [2024-11-26 19:05:35.572406] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:18.554 [2024-11-26 19:05:35.572413] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f15632f6000 00:15:18.554 [2024-11-26 19:05:35.573324] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:18.554 [2024-11-26 19:05:35.585432] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:18.554 [2024-11-26 19:05:35.585452] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:15:18.554 [2024-11-26 19:05:35.590519] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:18.554 [2024-11-26 19:05:35.590551] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:18.554 [2024-11-26 19:05:35.590612] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:15:18.554 [2024-11-26 19:05:35.590622] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:15:18.554 [2024-11-26 19:05:35.590625] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:15:18.554 [2024-11-26 19:05:35.591524] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:18.554 [2024-11-26 19:05:35.591533] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:15:18.554 [2024-11-26 19:05:35.591538] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:15:18.554 [2024-11-26 19:05:35.592532] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:18.554 [2024-11-26 19:05:35.592540] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:15:18.554 [2024-11-26 19:05:35.592545] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:18.554 [2024-11-26 19:05:35.593534] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:18.554 [2024-11-26 19:05:35.593541] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:18.554 [2024-11-26 19:05:35.594538] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:18.554 [2024-11-26 19:05:35.594547] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:15:18.554 [2024-11-26 19:05:35.594551] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:18.554 [2024-11-26 19:05:35.594556] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:18.554 [2024-11-26 19:05:35.594662] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:15:18.554 [2024-11-26 19:05:35.594665] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:18.554 [2024-11-26 19:05:35.594669] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:18.554 [2024-11-26 19:05:35.595542] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:18.554 [2024-11-26 19:05:35.596544] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:18.554 [2024-11-26 19:05:35.597549] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:18.554 [2024-11-26 19:05:35.598554] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:18.554 [2024-11-26 19:05:35.598585] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:18.554 [2024-11-26 19:05:35.599560] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:18.554 [2024-11-26 19:05:35.599567] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:18.554 [2024-11-26 19:05:35.599571] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:18.554 [2024-11-26 19:05:35.599585] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:15:18.554 [2024-11-26 19:05:35.599591] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:18.554 [2024-11-26 19:05:35.599602] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:18.554 [2024-11-26 19:05:35.599606] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:18.554 [2024-11-26 19:05:35.599608] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:18.554 [2024-11-26 19:05:35.599618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:18.554 [2024-11-26 19:05:35.607165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:18.554 
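Note: the register traffic above is the generic NVMe controller-enable handshake, carried over the vfio-user transport instead of real PCI BARs: read CAP (offset 0x0) and VS (0x8), clear CC.EN (0x14) and poll CSTS (0x1c) until RDY = 0, program AQA/ASQ/ACQ (0x24/0x28/0x30), set CC.EN = 1, then poll CSTS until RDY = 1 before the first IDENTIFY is issued. A trace like this can be reproduced with the identify example already invoked at the sh@83 step above; the -L flags enable the nvme, nvme_vfio and vfio_pci debug log components (a minimal sketch, assuming the same workspace layout and a running target):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./build/bin/spdk_nvme_identify \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
    -g -L nvme -L nvme_vfio -L vfio_pci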
[2024-11-26 19:05:35.607174] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:15:18.554 [2024-11-26 19:05:35.607177] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:15:18.554 [2024-11-26 19:05:35.607180] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:15:18.554 [2024-11-26 19:05:35.607184] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:18.554 [2024-11-26 19:05:35.607189] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:15:18.554 [2024-11-26 19:05:35.607192] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:15:18.554 [2024-11-26 19:05:35.607196] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:15:18.554 [2024-11-26 19:05:35.607202] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:18.554 [2024-11-26 19:05:35.607209] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:18.554 [2024-11-26 19:05:35.615162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:18.554 [2024-11-26 19:05:35.615171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:18.554 [2024-11-26 19:05:35.615178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:18.554 [2024-11-26 19:05:35.615184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:18.554 [2024-11-26 19:05:35.615190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:18.554 [2024-11-26 19:05:35.615193] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:18.554 [2024-11-26 19:05:35.615200] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:18.554 [2024-11-26 19:05:35.615206] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:18.554 [2024-11-26 19:05:35.623162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:18.554 [2024-11-26 19:05:35.623169] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:15:18.554 [2024-11-26 19:05:35.623173] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:15:18.554 [2024-11-26 19:05:35.623179] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:15:18.554 [2024-11-26 19:05:35.623184] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:18.554 [2024-11-26 19:05:35.623190] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:18.554 [2024-11-26 19:05:35.631164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:18.554 [2024-11-26 19:05:35.631213] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:15:18.554 [2024-11-26 19:05:35.631219] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:18.554 [2024-11-26 19:05:35.631224] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:18.554 [2024-11-26 19:05:35.631228] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:18.554 [2024-11-26 19:05:35.631230] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:18.554 [2024-11-26 19:05:35.631237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:18.554 [2024-11-26 19:05:35.639162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:18.554 [2024-11-26 19:05:35.639172] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:15:18.554 [2024-11-26 19:05:35.639182] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:15:18.554 [2024-11-26 19:05:35.639187] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:18.554 [2024-11-26 19:05:35.639192] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:18.554 [2024-11-26 19:05:35.639195] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:18.554 [2024-11-26 19:05:35.639198] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:18.554 [2024-11-26 19:05:35.639202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:18.555 [2024-11-26 19:05:35.646164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:18.555 [2024-11-26 19:05:35.646174] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:18.555 [2024-11-26 19:05:35.646180] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:15:18.555 [2024-11-26 19:05:35.646185] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:18.555 [2024-11-26 19:05:35.646188] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:18.555 [2024-11-26 19:05:35.646190] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:18.555 [2024-11-26 19:05:35.646195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:18.555 [2024-11-26 19:05:35.655164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:18.555 [2024-11-26 19:05:35.655175] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:18.555 [2024-11-26 19:05:35.655180] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:18.555 [2024-11-26 19:05:35.655185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:15:18.555 [2024-11-26 19:05:35.655189] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:18.555 [2024-11-26 19:05:35.655193] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:18.555 [2024-11-26 19:05:35.655197] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:15:18.555 [2024-11-26 19:05:35.655200] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:18.555 [2024-11-26 19:05:35.655204] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:15:18.555 [2024-11-26 19:05:35.655209] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:15:18.555 [2024-11-26 19:05:35.655222] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:18.555 [2024-11-26 19:05:35.663163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:18.555 [2024-11-26 19:05:35.663174] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:18.555 [2024-11-26 19:05:35.671164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:18.555 [2024-11-26 19:05:35.671175] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:18.555 [2024-11-26 19:05:35.679164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:15:18.555 [2024-11-26 19:05:35.679175] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:18.555 [2024-11-26 19:05:35.687164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:18.555 [2024-11-26 19:05:35.687177] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:18.555 [2024-11-26 19:05:35.687180] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:18.555 [2024-11-26 19:05:35.687183] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:18.555 [2024-11-26 19:05:35.687185] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:18.555 [2024-11-26 19:05:35.687188] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:18.555 [2024-11-26 19:05:35.687193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:18.555 [2024-11-26 19:05:35.687198] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:18.555 [2024-11-26 19:05:35.687201] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:18.555 [2024-11-26 19:05:35.687203] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:18.555 [2024-11-26 19:05:35.687208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:18.555 [2024-11-26 19:05:35.687213] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:18.555 [2024-11-26 19:05:35.687216] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:18.555 [2024-11-26 19:05:35.687218] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:18.555 [2024-11-26 19:05:35.687223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:18.555 [2024-11-26 19:05:35.687229] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:18.555 [2024-11-26 19:05:35.687232] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:18.555 [2024-11-26 19:05:35.687234] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:18.555 [2024-11-26 19:05:35.687238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:18.555 [2024-11-26 19:05:35.695164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:18.555 [2024-11-26 19:05:35.695175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:18.555 [2024-11-26 19:05:35.695184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:18.555 
[2024-11-26 19:05:35.695189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:18.555 ===================================================== 00:15:18.555 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:18.555 ===================================================== 00:15:18.555 Controller Capabilities/Features 00:15:18.555 ================================ 00:15:18.555 Vendor ID: 4e58 00:15:18.555 Subsystem Vendor ID: 4e58 00:15:18.555 Serial Number: SPDK2 00:15:18.555 Model Number: SPDK bdev Controller 00:15:18.555 Firmware Version: 25.01 00:15:18.555 Recommended Arb Burst: 6 00:15:18.555 IEEE OUI Identifier: 8d 6b 50 00:15:18.555 Multi-path I/O 00:15:18.555 May have multiple subsystem ports: Yes 00:15:18.555 May have multiple controllers: Yes 00:15:18.555 Associated with SR-IOV VF: No 00:15:18.555 Max Data Transfer Size: 131072 00:15:18.555 Max Number of Namespaces: 32 00:15:18.555 Max Number of I/O Queues: 127 00:15:18.555 NVMe Specification Version (VS): 1.3 00:15:18.555 NVMe Specification Version (Identify): 1.3 00:15:18.555 Maximum Queue Entries: 256 00:15:18.555 Contiguous Queues Required: Yes 00:15:18.555 Arbitration Mechanisms Supported 00:15:18.555 Weighted Round Robin: Not Supported 00:15:18.555 Vendor Specific: Not Supported 00:15:18.555 Reset Timeout: 15000 ms 00:15:18.555 Doorbell Stride: 4 bytes 00:15:18.555 NVM Subsystem Reset: Not Supported 00:15:18.555 Command Sets Supported 00:15:18.555 NVM Command Set: Supported 00:15:18.555 Boot Partition: Not Supported 00:15:18.555 Memory Page Size Minimum: 4096 bytes 00:15:18.555 Memory Page Size Maximum: 4096 bytes 00:15:18.555 Persistent Memory Region: Not Supported 00:15:18.555 Optional Asynchronous Events Supported 00:15:18.555 Namespace Attribute Notices: Supported 00:15:18.555 Firmware Activation Notices: Not Supported 00:15:18.555 ANA Change Notices: Not Supported 00:15:18.555 PLE Aggregate Log Change Notices: Not Supported 00:15:18.555 LBA Status Info Alert Notices: Not Supported 00:15:18.555 EGE Aggregate Log Change Notices: Not Supported 00:15:18.555 Normal NVM Subsystem Shutdown event: Not Supported 00:15:18.555 Zone Descriptor Change Notices: Not Supported 00:15:18.555 Discovery Log Change Notices: Not Supported 00:15:18.555 Controller Attributes 00:15:18.555 128-bit Host Identifier: Supported 00:15:18.555 Non-Operational Permissive Mode: Not Supported 00:15:18.555 NVM Sets: Not Supported 00:15:18.555 Read Recovery Levels: Not Supported 00:15:18.555 Endurance Groups: Not Supported 00:15:18.555 Predictable Latency Mode: Not Supported 00:15:18.555 Traffic Based Keep ALive: Not Supported 00:15:18.555 Namespace Granularity: Not Supported 00:15:18.555 SQ Associations: Not Supported 00:15:18.555 UUID List: Not Supported 00:15:18.555 Multi-Domain Subsystem: Not Supported 00:15:18.555 Fixed Capacity Management: Not Supported 00:15:18.555 Variable Capacity Management: Not Supported 00:15:18.555 Delete Endurance Group: Not Supported 00:15:18.555 Delete NVM Set: Not Supported 00:15:18.555 Extended LBA Formats Supported: Not Supported 00:15:18.555 Flexible Data Placement Supported: Not Supported 00:15:18.555 00:15:18.555 Controller Memory Buffer Support 00:15:18.555 ================================ 00:15:18.555 Supported: No 00:15:18.555 00:15:18.555 Persistent Memory Region Support 00:15:18.555 ================================ 00:15:18.555 Supported: No 00:15:18.555 00:15:18.555 Admin Command Set Attributes 
00:15:18.555 ============================ 00:15:18.555 Security Send/Receive: Not Supported 00:15:18.555 Format NVM: Not Supported 00:15:18.555 Firmware Activate/Download: Not Supported 00:15:18.555 Namespace Management: Not Supported 00:15:18.555 Device Self-Test: Not Supported 00:15:18.555 Directives: Not Supported 00:15:18.555 NVMe-MI: Not Supported 00:15:18.555 Virtualization Management: Not Supported 00:15:18.555 Doorbell Buffer Config: Not Supported 00:15:18.556 Get LBA Status Capability: Not Supported 00:15:18.556 Command & Feature Lockdown Capability: Not Supported 00:15:18.556 Abort Command Limit: 4 00:15:18.556 Async Event Request Limit: 4 00:15:18.556 Number of Firmware Slots: N/A 00:15:18.556 Firmware Slot 1 Read-Only: N/A 00:15:18.556 Firmware Activation Without Reset: N/A 00:15:18.556 Multiple Update Detection Support: N/A 00:15:18.556 Firmware Update Granularity: No Information Provided 00:15:18.556 Per-Namespace SMART Log: No 00:15:18.556 Asymmetric Namespace Access Log Page: Not Supported 00:15:18.556 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:18.556 Command Effects Log Page: Supported 00:15:18.556 Get Log Page Extended Data: Supported 00:15:18.556 Telemetry Log Pages: Not Supported 00:15:18.556 Persistent Event Log Pages: Not Supported 00:15:18.556 Supported Log Pages Log Page: May Support 00:15:18.556 Commands Supported & Effects Log Page: Not Supported 00:15:18.556 Feature Identifiers & Effects Log Page:May Support 00:15:18.556 NVMe-MI Commands & Effects Log Page: May Support 00:15:18.556 Data Area 4 for Telemetry Log: Not Supported 00:15:18.556 Error Log Page Entries Supported: 128 00:15:18.556 Keep Alive: Supported 00:15:18.556 Keep Alive Granularity: 10000 ms 00:15:18.556 00:15:18.556 NVM Command Set Attributes 00:15:18.556 ========================== 00:15:18.556 Submission Queue Entry Size 00:15:18.556 Max: 64 00:15:18.556 Min: 64 00:15:18.556 Completion Queue Entry Size 00:15:18.556 Max: 16 00:15:18.556 Min: 16 00:15:18.556 Number of Namespaces: 32 00:15:18.556 Compare Command: Supported 00:15:18.556 Write Uncorrectable Command: Not Supported 00:15:18.556 Dataset Management Command: Supported 00:15:18.556 Write Zeroes Command: Supported 00:15:18.556 Set Features Save Field: Not Supported 00:15:18.556 Reservations: Not Supported 00:15:18.556 Timestamp: Not Supported 00:15:18.556 Copy: Supported 00:15:18.556 Volatile Write Cache: Present 00:15:18.556 Atomic Write Unit (Normal): 1 00:15:18.556 Atomic Write Unit (PFail): 1 00:15:18.556 Atomic Compare & Write Unit: 1 00:15:18.556 Fused Compare & Write: Supported 00:15:18.556 Scatter-Gather List 00:15:18.556 SGL Command Set: Supported (Dword aligned) 00:15:18.556 SGL Keyed: Not Supported 00:15:18.556 SGL Bit Bucket Descriptor: Not Supported 00:15:18.556 SGL Metadata Pointer: Not Supported 00:15:18.556 Oversized SGL: Not Supported 00:15:18.556 SGL Metadata Address: Not Supported 00:15:18.556 SGL Offset: Not Supported 00:15:18.556 Transport SGL Data Block: Not Supported 00:15:18.556 Replay Protected Memory Block: Not Supported 00:15:18.556 00:15:18.556 Firmware Slot Information 00:15:18.556 ========================= 00:15:18.556 Active slot: 1 00:15:18.556 Slot 1 Firmware Revision: 25.01 00:15:18.556 00:15:18.556 00:15:18.556 Commands Supported and Effects 00:15:18.556 ============================== 00:15:18.556 Admin Commands 00:15:18.556 -------------- 00:15:18.556 Get Log Page (02h): Supported 00:15:18.556 Identify (06h): Supported 00:15:18.556 Abort (08h): Supported 00:15:18.556 Set Features (09h): Supported 
00:15:18.556 Get Features (0Ah): Supported 00:15:18.556 Asynchronous Event Request (0Ch): Supported 00:15:18.556 Keep Alive (18h): Supported 00:15:18.556 I/O Commands 00:15:18.556 ------------ 00:15:18.556 Flush (00h): Supported LBA-Change 00:15:18.556 Write (01h): Supported LBA-Change 00:15:18.556 Read (02h): Supported 00:15:18.556 Compare (05h): Supported 00:15:18.556 Write Zeroes (08h): Supported LBA-Change 00:15:18.556 Dataset Management (09h): Supported LBA-Change 00:15:18.556 Copy (19h): Supported LBA-Change 00:15:18.556 00:15:18.556 Error Log 00:15:18.556 ========= 00:15:18.556 00:15:18.556 Arbitration 00:15:18.556 =========== 00:15:18.556 Arbitration Burst: 1 00:15:18.556 00:15:18.556 Power Management 00:15:18.556 ================ 00:15:18.556 Number of Power States: 1 00:15:18.556 Current Power State: Power State #0 00:15:18.556 Power State #0: 00:15:18.556 Max Power: 0.00 W 00:15:18.556 Non-Operational State: Operational 00:15:18.556 Entry Latency: Not Reported 00:15:18.556 Exit Latency: Not Reported 00:15:18.556 Relative Read Throughput: 0 00:15:18.556 Relative Read Latency: 0 00:15:18.556 Relative Write Throughput: 0 00:15:18.556 Relative Write Latency: 0 00:15:18.556 Idle Power: Not Reported 00:15:18.556 Active Power: Not Reported 00:15:18.556 Non-Operational Permissive Mode: Not Supported 00:15:18.556 00:15:18.556 Health Information 00:15:18.556 ================== 00:15:18.556 Critical Warnings: 00:15:18.556 Available Spare Space: OK 00:15:18.556 Temperature: OK 00:15:18.556 Device Reliability: OK 00:15:18.556 Read Only: No 00:15:18.556 Volatile Memory Backup: OK 00:15:18.556 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:18.556 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:18.556 Available Spare: 0% 00:15:18.556 Available Spare Threshold: 0% 00:15:18.556 Life Percentage Used: 0% 00:15:18.556 Data Units Read: 0 00:15:18.556 Data Units Written: 0 00:15:18.556 Host Read Commands: 0 00:15:18.556 Host Write Commands: 0 00:15:18.556 Controller Busy Time: 0 minutes 00:15:18.556 Power Cycles: 0 00:15:18.556 Power On Hours: 0 hours 00:15:18.556 Unsafe Shutdowns: 0 00:15:18.556 Unrecoverable Media Errors: 0 00:15:18.556 Lifetime Error Log Entries: 0 00:15:18.556 Warning Temperature Time: 0 minutes 00:15:18.556 Critical Temperature Time: 0 minutes 00:15:18.556 00:15:18.556 Number of Queues 00:15:18.556 ================ 00:15:18.556 Number of I/O Submission Queues: 127 00:15:18.556 Number of I/O Completion Queues: 127 00:15:18.556 00:15:18.556 Active Namespaces 00:15:18.556 ================= 00:15:18.556 Namespace ID:1 00:15:18.556 Error Recovery Timeout: Unlimited 00:15:18.556 Command Set Identifier: NVM (00h) 00:15:18.556 Deallocate: Supported 00:15:18.556 Deallocated/Unwritten Error: Not Supported 00:15:18.556 Deallocated Read Value: Unknown 00:15:18.556 Deallocate in Write Zeroes: Not Supported 00:15:18.556 Deallocated Guard Field: 0xFFFF 00:15:18.556 Flush: Supported 00:15:18.556 Reservation: Supported 00:15:18.556 Namespace Sharing Capabilities: Multiple Controllers 00:15:18.556 Size (in LBAs): 131072 (0GiB) 00:15:18.556 Capacity (in LBAs): 131072 (0GiB) 00:15:18.556 Utilization (in LBAs): 131072 (0GiB) 00:15:18.556 NGUID: C8AC019F480B413CB0FDF81104D5D643 00:15:18.556 UUID: c8ac019f-480b-413c-b0fd-f81104d5d643 00:15:18.556 Thin Provisioning: Not Supported 00:15:18.556 Per-NS Atomic Units: Yes 00:15:18.556 Atomic Boundary Size (Normal): 0 00:15:18.556 Atomic Boundary Size (PFail): 0 00:15:18.556 Atomic Boundary Offset: 0 00:15:18.556 Maximum Single Source Range Length: 65535 00:15:18.556 Maximum Copy Length: 65535 00:15:18.556 Maximum Source Range Count: 1 00:15:18.556 NGUID/EUI64 Never Reused: No 00:15:18.556 Namespace Write Protected: No 00:15:18.557 Number of LBA Formats: 1 00:15:18.557 Current LBA Format: LBA Format #00 00:15:18.557 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:18.557 00:15:18.557
[2024-11-26 19:05:35.695259] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:18.556 [2024-11-26 19:05:35.703164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:18.556 [2024-11-26 19:05:35.703189] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:15:18.556 [2024-11-26 19:05:35.703196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.556 [2024-11-26 19:05:35.703201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.556 [2024-11-26 19:05:35.703205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.556 [2024-11-26 19:05:35.703210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.556 [2024-11-26 19:05:35.703239] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:18.556 [2024-11-26 19:05:35.703247] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:18.556 [2024-11-26 19:05:35.704245] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:18.556 [2024-11-26 19:05:35.704284] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:15:18.556 [2024-11-26 19:05:35.704290] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:15:18.556 [2024-11-26 19:05:35.705252] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:18.556 [2024-11-26 19:05:35.705261] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:15:18.556 [2024-11-26 19:05:35.705301] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:18.556 [2024-11-26 19:05:35.706272] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
19:05:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:18.817 [2024-11-26 19:05:35.896535] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:24.101 Initializing NVMe Controllers 00:15:24.101
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:24.101 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:24.101 Initialization complete. Launching workers. 00:15:24.101 ======================================================== 00:15:24.101 Latency(us) 00:15:24.101 Device Information : IOPS MiB/s Average min max 00:15:24.101 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40038.00 156.40 3199.31 846.39 8157.48 00:15:24.101 ======================================================== 00:15:24.101 Total : 40038.00 156.40 3199.31 846.39 8157.48 00:15:24.101 00:15:24.101 [2024-11-26 19:05:41.005358] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:24.101 19:05:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:24.101 [2024-11-26 19:05:41.206941] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:29.389 Initializing NVMe Controllers 00:15:29.389 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:29.389 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:29.389 Initialization complete. Launching workers. 00:15:29.389 ======================================================== 00:15:29.389 Latency(us) 00:15:29.389 Device Information : IOPS MiB/s Average min max 00:15:29.389 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40043.69 156.42 3196.38 856.73 6922.16 00:15:29.389 ======================================================== 00:15:29.389 Total : 40043.69 156.42 3196.38 856.73 6922.16 00:15:29.389 00:15:29.389 [2024-11-26 19:05:46.226726] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:29.389 19:05:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:29.389 [2024-11-26 19:05:46.437957] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:34.676 [2024-11-26 19:05:51.567239] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:34.676 Initializing NVMe Controllers 00:15:34.676 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:34.676 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:34.676 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:34.676 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:34.676 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:34.676 Initialization complete. Launching workers. 
00:15:34.676 Starting thread on core 2 00:15:34.676 Starting thread on core 3 00:15:34.676 Starting thread on core 1 00:15:34.676 19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:34.676 [2024-11-26 19:05:51.810501] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:37.972 [2024-11-26 19:05:54.888830] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:37.972 Initializing NVMe Controllers 00:15:37.972 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:37.972 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:37.972 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:37.972 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:37.972 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:37.972 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:37.972 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:37.972 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:37.972 Initialization complete. Launching workers. 00:15:37.972 Starting thread on core 1 with urgent priority queue 00:15:37.972 Starting thread on core 2 with urgent priority queue 00:15:37.972 Starting thread on core 3 with urgent priority queue 00:15:37.972 Starting thread on core 0 with urgent priority queue 00:15:37.972 SPDK bdev Controller (SPDK2 ) core 0: 10653.67 IO/s 9.39 secs/100000 ios 00:15:37.972 SPDK bdev Controller (SPDK2 ) core 1: 10569.00 IO/s 9.46 secs/100000 ios 00:15:37.972 SPDK bdev Controller (SPDK2 ) core 2: 7970.67 IO/s 12.55 secs/100000 ios 00:15:37.972 SPDK bdev Controller (SPDK2 ) core 3: 11081.00 IO/s 9.02 secs/100000 ios 00:15:37.972 ======================================================== 00:15:37.972 00:15:37.972 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:37.972 [2024-11-26 19:05:55.127550] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:37.972 Initializing NVMe Controllers 00:15:37.972 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:37.972 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:37.972 Namespace ID: 1 size: 0GB 00:15:37.972 Initialization complete. 00:15:37.972 INFO: using host memory buffer for IO 00:15:37.972 Hello world! 
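Note: the sh@84 through sh@89 steps in this stretch drive the same vfio-user endpoint through each SPDK example application in turn: spdk_nvme_perf in read and then write mode, reconnect, arbitration, hello_world (just completed above) and the overhead tool (its output follows below). In the two perf tables the MiB/s column follows from the IOPS column at the 4 KiB I/O size used here: 40038.00 IOPS × 4096 B / 2^20 ≈ 156.40 MiB/s. A condensed sketch of the sequence, with every argument copied from the command lines in this log (the TR variable is only shorthand; the script passes the transport string inline):

  TR='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  ./build/bin/spdk_nvme_perf -r "$TR" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
  ./build/bin/spdk_nvme_perf -r "$TR" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
  ./build/examples/reconnect -r "$TR" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
  ./build/examples/arbitration -t 3 -r "$TR" -d 256 -g
  ./build/examples/hello_world -d 256 -g -r "$TR"
  ./test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r "$TR"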
00:15:37.972 [2024-11-26 19:05:55.137609] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:37.972 19:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:38.234 [2024-11-26 19:05:55.373983] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:39.616 Initializing NVMe Controllers 00:15:39.616 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:39.616 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:39.616 Initialization complete. Launching workers. 00:15:39.616 submit (in ns) avg, min, max = 5136.9, 2847.5, 3997512.5 00:15:39.616 complete (in ns) avg, min, max = 16297.8, 1647.5, 3997631.7 00:15:39.616 00:15:39.616 Submit histogram 00:15:39.616 ================ 00:15:39.616 Range in us Cumulative Count 00:15:39.616 2.840 - 2.853: 0.0632% ( 13) 00:15:39.616 2.853 - 2.867: 0.7877% ( 149) 00:15:39.616 2.867 - 2.880: 2.4311% ( 338) 00:15:39.616 2.880 - 2.893: 5.7957% ( 692) 00:15:39.616 2.893 - 2.907: 10.6481% ( 998) 00:15:39.616 2.907 - 2.920: 17.3579% ( 1380) 00:15:39.616 2.920 - 2.933: 23.2363% ( 1209) 00:15:39.616 2.933 - 2.947: 28.4339% ( 1069) 00:15:39.616 2.947 - 2.960: 33.5294% ( 1048) 00:15:39.616 2.960 - 2.973: 38.3284% ( 987) 00:15:39.616 2.973 - 2.987: 44.0220% ( 1171) 00:15:39.616 2.987 - 3.000: 49.7204% ( 1172) 00:15:39.616 3.000 - 3.013: 56.4205% ( 1378) 00:15:39.616 3.013 - 3.027: 63.8547% ( 1529) 00:15:39.616 3.027 - 3.040: 72.5385% ( 1786) 00:15:39.616 3.040 - 3.053: 80.0506% ( 1545) 00:15:39.616 3.053 - 3.067: 86.9548% ( 1420) 00:15:39.616 3.067 - 3.080: 92.1282% ( 1064) 00:15:39.616 3.080 - 3.093: 96.2027% ( 838) 00:15:39.616 3.093 - 3.107: 98.0357% ( 377) 00:15:39.616 3.107 - 3.120: 98.8720% ( 172) 00:15:39.616 3.120 - 3.133: 99.1929% ( 66) 00:15:39.616 3.133 - 3.147: 99.3631% ( 35) 00:15:39.616 3.147 - 3.160: 99.4652% ( 21) 00:15:39.616 3.160 - 3.173: 99.5138% ( 10) 00:15:39.616 3.173 - 3.187: 99.5235% ( 2) 00:15:39.616 3.187 - 3.200: 99.5332% ( 2) 00:15:39.616 3.200 - 3.213: 99.5430% ( 2) 00:15:39.616 3.253 - 3.267: 99.5478% ( 1) 00:15:39.616 3.267 - 3.280: 99.5527% ( 1) 00:15:39.616 3.280 - 3.293: 99.5575% ( 1) 00:15:39.616 3.307 - 3.320: 99.5624% ( 1) 00:15:39.616 3.373 - 3.387: 99.5673% ( 1) 00:15:39.616 3.520 - 3.547: 99.5721% ( 1) 00:15:39.616 3.547 - 3.573: 99.5770% ( 1) 00:15:39.616 3.627 - 3.653: 99.5819% ( 1) 00:15:39.616 4.000 - 4.027: 99.5867% ( 1) 00:15:39.616 4.053 - 4.080: 99.5916% ( 1) 00:15:39.616 4.107 - 4.133: 99.5964% ( 1) 00:15:39.616 4.187 - 4.213: 99.6013% ( 1) 00:15:39.616 4.267 - 4.293: 99.6062% ( 1) 00:15:39.616 4.320 - 4.347: 99.6110% ( 1) 00:15:39.616 4.400 - 4.427: 99.6159% ( 1) 00:15:39.616 4.427 - 4.453: 99.6208% ( 1) 00:15:39.616 4.480 - 4.507: 99.6305% ( 2) 00:15:39.616 4.533 - 4.560: 99.6353% ( 1) 00:15:39.616 4.560 - 4.587: 99.6451% ( 2) 00:15:39.616 4.587 - 4.613: 99.6499% ( 1) 00:15:39.616 4.613 - 4.640: 99.6548% ( 1) 00:15:39.616 4.693 - 4.720: 99.6742% ( 4) 00:15:39.616 4.720 - 4.747: 99.6937% ( 4) 00:15:39.616 4.747 - 4.773: 99.7034% ( 2) 00:15:39.616 4.773 - 4.800: 99.7083% ( 1) 00:15:39.616 4.800 - 4.827: 99.7326% ( 5) 00:15:39.616 4.907 - 4.933: 99.7374% ( 1) 00:15:39.616 4.933 - 4.960: 99.7569% ( 4) 00:15:39.616 4.987 - 5.013: 99.7618% ( 1) 00:15:39.616 5.013 - 5.040: 
99.7763% ( 3) 00:15:39.616 5.040 - 5.067: 99.7861% ( 2) 00:15:39.616 5.067 - 5.093: 99.7958% ( 2) 00:15:39.616 5.093 - 5.120: 99.8007% ( 1) 00:15:39.616 5.120 - 5.147: 99.8055% ( 1) 00:15:39.616 5.147 - 5.173: 99.8104% ( 1) 00:15:39.616 5.173 - 5.200: 99.8152% ( 1) 00:15:39.616 5.200 - 5.227: 99.8201% ( 1) 00:15:39.616 5.227 - 5.253: 99.8250% ( 1) 00:15:39.616 5.253 - 5.280: 99.8347% ( 2) 00:15:39.616 5.280 - 5.307: 99.8395% ( 1) 00:15:39.616 5.307 - 5.333: 99.8444% ( 1) 00:15:39.616 5.413 - 5.440: 99.8493% ( 1) 00:15:39.616 5.493 - 5.520: 99.8541% ( 1) 00:15:39.616 5.520 - 5.547: 99.8590% ( 1) 00:15:39.616 5.627 - 5.653: 99.8639% ( 1) 00:15:39.616 5.867 - 5.893: 99.8687% ( 1) 00:15:39.616 5.920 - 5.947: 99.8736% ( 1) 00:15:39.616 5.947 - 5.973: 99.8833% ( 2) 00:15:39.616 6.027 - 6.053: 99.8882% ( 1) 00:15:39.616 6.107 - 6.133: 99.8930% ( 1) 00:15:39.616 6.133 - 6.160: 99.8979% ( 1) 00:15:39.617 6.240 - 6.267: 99.9028% ( 1) 00:15:39.617 6.400 - 6.427: 99.9076% ( 1) 00:15:39.617 6.427 - 6.453: 99.9125% ( 1) 00:15:39.617 6.453 - 6.480: 99.9222% ( 2) 00:15:39.617 6.480 - 6.507: 99.9271% ( 1) 00:15:39.617 6.587 - 6.613: 99.9319% ( 1) 00:15:39.617 7.840 - 7.893: 99.9368% ( 1) 00:15:39.617 8.373 - 8.427: 99.9417% ( 1) 00:15:39.617 9.813 - 9.867: 99.9465% ( 1) 00:15:39.617 3986.773 - 4014.080: 100.0000% ( 11) 00:15:39.617 00:15:39.617 Complete histogram 00:15:39.617 ================== 00:15:39.617 Range in us Cumulative Count 00:15:39.617 1.647 - 1.653: 0.7682% ( 158) 00:15:39.617 1.653 - 1.660: 1.0988% ( 68) 00:15:39.617 1.660 - 1.667: 1.1377% ( 8) 00:15:39.617 1.667 - 1.673: 1.3322% ( 40) 00:15:39.617 1.673 - 1.680: 1.4246% ( 19) 00:15:39.617 1.680 - 1.687: 1.4538% ( 6) 00:15:39.617 1.687 - 1.693: 1.4975% ( 9) 00:15:39.617 1.693 - 1.700: 1.5073% ( 2) 00:15:39.617 1.700 - 1.707: 32.9460% ( 6466) 00:15:39.617 1.707 - 1.720: 59.7462% ( 5512) 00:15:39.617 1.720 - 1.733: 78.0085% ( 3756) 00:15:39.617 1.733 - 1.747: 83.4200% ( 1113) 00:15:39.617 1.747 - 1.760: 84.5918% ( 241) 00:15:39.617 1.760 - 1.773: 88.9143% ( 889) 00:15:39.617 1.773 - 1.787: 93.9272% ( 1031) 00:15:39.617 1.787 - 1.800: 97.4814% ( 731) 00:15:39.617 1.800 - 1.813: 98.9400% ( 300) 00:15:39.617 1.813 - 1.827: 99.4068% ( 96) 00:15:39.617 1.827 - 1.840: 99.4895% ( 17) 00:15:39.617 1.853 - 1.867: 99.4943% ( 1) 00:15:39.617 1.867 - 1.880: 99.4992% ( 1) 00:15:39.617 1.907 - 1.920: 99.5041% ( 1) 00:15:39.617 3.293 - 3.307: 99.5089% ( 1) 00:15:39.617 3.320 - 3.333: 99.5138% ( 1) 00:15:39.617 3.467 - 3.493: 99.5186% ( 1) 00:15:39.617 3.600 - 3.627: 99.5235% ( 1) 00:15:39.617 3.627 - 3.653: 99.5284% ( 1) 00:15:39.617 3.653 - 3.680: 99.5332% ( 1) 00:15:39.617 3.760 - 3.787: 99.5381% ( 1) 00:15:39.617 3.787 - 3.813: 99.5430% ( 1) 00:15:39.617 3.813 - 3.840: 99.5527% ( 2) 00:15:39.617 3.893 - 3.920: 99.5624% ( 2) 00:15:39.617 3.920 - 3.947: 99.5721% ( 2) 00:15:39.617 3.947 - 3.973: 99.5770% ( 1) 00:15:39.617 4.027 - 4.053: 99.5819% ( 1) 00:15:39.617 4.053 - 4.080: 99.5916% ( 2) 00:15:39.617 4.160 - 4.187: 99.5964% ( 1) 00:15:39.617 4.453 - 4.480: 99.6013% ( 1) 00:15:39.617 4.667 - 4.693: 99.6062% ( 1) 00:15:39.617 5.040 - 5.067: 99.6110% ( 1) 00:15:39.617 5.200 - 5.227: 99.6159% ( 1) 00:15:39.617 7.787 - 7.840: 99.6208% ( 1) 00:15:39.617 9.120 - 9.173: 99.6256% ( 1) 00:15:39.617 33.920 - 34.133: 99.6305% ( 1) 00:15:39.617 134.827 - 135.680: 99.6353% ( 1) 00:15:39.617 3986.773 - 4014.080: 100.0000% ( 75) 00:15:39.617 00 00:15:39.617
[2024-11-26 19:05:56.465679] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:39.617
19:05:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:39.617 19:05:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:39.617 19:05:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:39.617 19:05:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:39.617 19:05:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:39.617 [ 00:15:39.617 { 00:15:39.617 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:39.617 "subtype": "Discovery", 00:15:39.617 "listen_addresses": [], 00:15:39.617 "allow_any_host": true, 00:15:39.617 "hosts": [] 00:15:39.617 }, 00:15:39.617 { 00:15:39.617 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:39.617 "subtype": "NVMe", 00:15:39.617 "listen_addresses": [ 00:15:39.617 { 00:15:39.617 "trtype": "VFIOUSER", 00:15:39.617 "adrfam": "IPv4", 00:15:39.617 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:39.617 "trsvcid": "0" 00:15:39.617 } 00:15:39.617 ], 00:15:39.617 "allow_any_host": true, 00:15:39.617 "hosts": [], 00:15:39.617 "serial_number": "SPDK1", 00:15:39.617 "model_number": "SPDK bdev Controller", 00:15:39.617 "max_namespaces": 32, 00:15:39.617 "min_cntlid": 1, 00:15:39.617 "max_cntlid": 65519, 00:15:39.617 "namespaces": [ 00:15:39.617 { 00:15:39.617 "nsid": 1, 00:15:39.617 "bdev_name": "Malloc1", 00:15:39.617 "name": "Malloc1", 00:15:39.617 "nguid": "9226604F4FE54A1DACD63ACED93BF876", 00:15:39.617 "uuid": "9226604f-4fe5-4a1d-acd6-3aced93bf876" 00:15:39.617 }, 00:15:39.617 { 00:15:39.617 "nsid": 2, 00:15:39.617 "bdev_name": "Malloc3", 00:15:39.617 "name": "Malloc3", 00:15:39.617 "nguid": "B3F7803F5CC5467F9AE09657CFFC2DF0", 00:15:39.617 "uuid": "b3f7803f-5cc5-467f-9ae0-9657cffc2df0" 00:15:39.617 } 00:15:39.617 ] 00:15:39.617 }, 00:15:39.617 { 00:15:39.617 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:39.617 "subtype": "NVMe", 00:15:39.617 "listen_addresses": [ 00:15:39.617 { 00:15:39.617 "trtype": "VFIOUSER", 00:15:39.617 "adrfam": "IPv4", 00:15:39.617 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:39.617 "trsvcid": "0" 00:15:39.617 } 00:15:39.617 ], 00:15:39.617 "allow_any_host": true, 00:15:39.617 "hosts": [], 00:15:39.617 "serial_number": "SPDK2", 00:15:39.617 "model_number": "SPDK bdev Controller", 00:15:39.617 "max_namespaces": 32, 00:15:39.617 "min_cntlid": 1, 00:15:39.617 "max_cntlid": 65519, 00:15:39.617 "namespaces": [ 00:15:39.617 { 00:15:39.617 "nsid": 1, 00:15:39.617 "bdev_name": "Malloc2", 00:15:39.617 "name": "Malloc2", 00:15:39.617 "nguid": "C8AC019F480B413CB0FDF81104D5D643", 00:15:39.617 "uuid": "c8ac019f-480b-413c-b0fd-f81104d5d643" 00:15:39.617 } 00:15:39.617 ] 00:15:39.617 } 00:15:39.617 ] 00:15:39.617 19:05:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:39.617 19:05:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2899325 00:15:39.617 19:05:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:39.617 19:05:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:39.617 19:05:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:39.617 19:05:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:39.617 19:05:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:39.617 19:05:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:39.617 19:05:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:39.617 19:05:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:39.878 [2024-11-26 19:05:56.845449] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:39.878 Malloc4 00:15:39.878 19:05:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:39.878 [2024-11-26 19:05:57.025671] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:39.878 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:39.878 Asynchronous Event Request test 00:15:39.878 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:39.878 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:39.878 Registering asynchronous event callbacks... 00:15:39.878 Starting namespace attribute notice tests for all controllers... 00:15:39.878 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:39.878 aer_cb - Changed Namespace 00:15:39.878 Cleaning up... 
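Note: this second AER pass repeats the namespace-hot-add check, this time against cnode2: the aer tool is started in the background with a touch file, the script waits for the file, removes it, then creates Malloc4 and attaches it as nsid 2, which makes the target raise a Namespace Attribute Notice (log page 4, aen_event_type 0x02), exactly what aer_cb reports above. The nvmf_get_subsystems output printed below accordingly lists Malloc4 under cnode2. Condensed from the sh@ steps visible in this trace ($traddr and $subnqn are the locals set at sh@22/sh@23; waitforfile is the polling helper from common/autotest_common.sh):

  test/nvme/aer/aer -r "trtype:VFIOUSER traddr:$traddr subnqn:$subnqn" -n 2 -g -t /tmp/aer_touch_file &
  aerpid=$!
  waitforfile /tmp/aer_touch_file   # aer touches the file once its callbacks are registered
  rm -f /tmp/aer_touch_file
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2   # triggers the AEN
  scripts/rpc.py nvmf_get_subsystems
  wait $aerpid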
00:15:40.137 [ 00:15:40.137 { 00:15:40.137 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:40.137 "subtype": "Discovery", 00:15:40.137 "listen_addresses": [], 00:15:40.137 "allow_any_host": true, 00:15:40.137 "hosts": [] 00:15:40.137 }, 00:15:40.137 { 00:15:40.137 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:40.137 "subtype": "NVMe", 00:15:40.137 "listen_addresses": [ 00:15:40.137 { 00:15:40.137 "trtype": "VFIOUSER", 00:15:40.137 "adrfam": "IPv4", 00:15:40.137 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:40.137 "trsvcid": "0" 00:15:40.137 } 00:15:40.137 ], 00:15:40.137 "allow_any_host": true, 00:15:40.137 "hosts": [], 00:15:40.137 "serial_number": "SPDK1", 00:15:40.137 "model_number": "SPDK bdev Controller", 00:15:40.137 "max_namespaces": 32, 00:15:40.137 "min_cntlid": 1, 00:15:40.137 "max_cntlid": 65519, 00:15:40.137 "namespaces": [ 00:15:40.137 { 00:15:40.137 "nsid": 1, 00:15:40.137 "bdev_name": "Malloc1", 00:15:40.137 "name": "Malloc1", 00:15:40.137 "nguid": "9226604F4FE54A1DACD63ACED93BF876", 00:15:40.137 "uuid": "9226604f-4fe5-4a1d-acd6-3aced93bf876" 00:15:40.137 }, 00:15:40.137 { 00:15:40.137 "nsid": 2, 00:15:40.137 "bdev_name": "Malloc3", 00:15:40.137 "name": "Malloc3", 00:15:40.137 "nguid": "B3F7803F5CC5467F9AE09657CFFC2DF0", 00:15:40.137 "uuid": "b3f7803f-5cc5-467f-9ae0-9657cffc2df0" 00:15:40.137 } 00:15:40.137 ] 00:15:40.137 }, 00:15:40.137 { 00:15:40.137 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:40.137 "subtype": "NVMe", 00:15:40.137 "listen_addresses": [ 00:15:40.137 { 00:15:40.137 "trtype": "VFIOUSER", 00:15:40.137 "adrfam": "IPv4", 00:15:40.137 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:40.137 "trsvcid": "0" 00:15:40.137 } 00:15:40.137 ], 00:15:40.137 "allow_any_host": true, 00:15:40.137 "hosts": [], 00:15:40.137 "serial_number": "SPDK2", 00:15:40.137 "model_number": "SPDK bdev Controller", 00:15:40.137 "max_namespaces": 32, 00:15:40.137 "min_cntlid": 1, 00:15:40.137 "max_cntlid": 65519, 00:15:40.137 "namespaces": [ 00:15:40.137 { 00:15:40.137 "nsid": 1, 00:15:40.137 "bdev_name": "Malloc2", 00:15:40.137 "name": "Malloc2", 00:15:40.137 "nguid": "C8AC019F480B413CB0FDF81104D5D643", 00:15:40.137 "uuid": "c8ac019f-480b-413c-b0fd-f81104d5d643" 00:15:40.137 }, 00:15:40.137 { 00:15:40.137 "nsid": 2, 00:15:40.137 "bdev_name": "Malloc4", 00:15:40.137 "name": "Malloc4", 00:15:40.137 "nguid": "34D16C3E5EBD451388BD46B1D245F643", 00:15:40.137 "uuid": "34d16c3e-5ebd-4513-88bd-46b1d245f643" 00:15:40.137 } 00:15:40.137 ] 00:15:40.137 } 00:15:40.137 ] 00:15:40.137 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2899325 00:15:40.137 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:40.137 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2890446 00:15:40.137 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2890446 ']' 00:15:40.137 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2890446 00:15:40.137 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:40.137 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:40.137 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2890446 00:15:40.137 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:40.137 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:40.137 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2890446' 00:15:40.137 killing process with pid 2890446 00:15:40.137 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2890446 00:15:40.137 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2890446 00:15:40.398 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:40.398 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:40.398 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:40.398 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:40.398 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:40.398 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2899544 00:15:40.398 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2899544' 00:15:40.398 Process pid: 2899544 00:15:40.398 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:40.398 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:40.398 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2899544 00:15:40.398 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2899544 ']' 00:15:40.398 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.398 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:40.398 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.398 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:40.398 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:40.398 [2024-11-26 19:05:57.509502] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:40.398 [2024-11-26 19:05:57.510434] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
00:15:40.398 [2024-11-26 19:05:57.510477] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:40.398 [2024-11-26 19:05:57.594925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:40.659 [2024-11-26 19:05:57.624495] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:40.659 [2024-11-26 19:05:57.624528] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:40.659 [2024-11-26 19:05:57.624535] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:40.659 [2024-11-26 19:05:57.624539] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:40.659 [2024-11-26 19:05:57.624544] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:40.659 [2024-11-26 19:05:57.625795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.659 [2024-11-26 19:05:57.625943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:40.659 [2024-11-26 19:05:57.626090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.659 [2024-11-26 19:05:57.626092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:40.659 [2024-11-26 19:05:57.678180] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:40.659 [2024-11-26 19:05:57.679150] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:40.659 [2024-11-26 19:05:57.680008] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:40.659 [2024-11-26 19:05:57.680416] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:40.659 [2024-11-26 19:05:57.680438] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
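This second bring-up exercises interrupt mode: nvmf_tgt is restarted with --interrupt-mode across cores 0-3, each reactor starts, and every nvmf poll-group thread is switched to intr mode before the transport is created with the extra '-M -I' arguments the harness passes through for this pass. A standalone sketch of the same bring-up, where the poll loop is an assumed stand-in for the harness's waitforlisten helper:

  # Flags taken verbatim from the command line traced above.
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  nvmfpid=$!

  # Stand-in for waitforlisten: poll the default RPC socket until the target answers.
  until scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

  # Create the vfio-user transport with the extra transport args for this pass.
  scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I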
00:15:41.230 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:41.230 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:41.230 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:42.169 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:42.429 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:42.429 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:42.429 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:42.429 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:42.429 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:42.688 Malloc1 00:15:42.688 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:42.949 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:42.949 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:43.209 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:43.209 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:43.209 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:43.469 Malloc2 00:15:43.469 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:43.730 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:43.730 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:43.990 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:43.990 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2899544 00:15:43.990 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 2899544 ']' 00:15:43.990 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2899544 00:15:43.991 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:43.991 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:43.991 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2899544 00:15:43.991 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:43.991 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:43.991 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2899544' 00:15:43.991 killing process with pid 2899544 00:15:43.991 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2899544 00:15:43.991 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2899544 00:15:44.250 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:44.250 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:44.250 00:15:44.250 real 0m51.002s 00:15:44.250 user 3m15.461s 00:15:44.250 sys 0m2.650s 00:15:44.250 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:44.250 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:44.250 ************************************ 00:15:44.250 END TEST nvmf_vfio_user 00:15:44.250 ************************************ 00:15:44.250 19:06:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:44.250 19:06:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:44.250 19:06:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:44.250 19:06:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:44.250 ************************************ 00:15:44.250 START TEST nvmf_vfio_user_nvme_compliance 00:15:44.250 ************************************ 00:15:44.250 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:44.250 * Looking for test storage... 
00:15:44.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:44.250 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:44.250 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:15:44.250 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:44.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.511 --rc genhtml_branch_coverage=1 00:15:44.511 --rc genhtml_function_coverage=1 00:15:44.511 --rc genhtml_legend=1 00:15:44.511 --rc geninfo_all_blocks=1 00:15:44.511 --rc geninfo_unexecuted_blocks=1 00:15:44.511 00:15:44.511 ' 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:44.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.511 --rc genhtml_branch_coverage=1 00:15:44.511 --rc genhtml_function_coverage=1 00:15:44.511 --rc genhtml_legend=1 00:15:44.511 --rc geninfo_all_blocks=1 00:15:44.511 --rc geninfo_unexecuted_blocks=1 00:15:44.511 00:15:44.511 ' 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:44.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.511 --rc genhtml_branch_coverage=1 00:15:44.511 --rc genhtml_function_coverage=1 00:15:44.511 --rc genhtml_legend=1 00:15:44.511 --rc geninfo_all_blocks=1 00:15:44.511 --rc geninfo_unexecuted_blocks=1 00:15:44.511 00:15:44.511 ' 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:44.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.511 --rc genhtml_branch_coverage=1 00:15:44.511 --rc genhtml_function_coverage=1 00:15:44.511 --rc genhtml_legend=1 00:15:44.511 --rc geninfo_all_blocks=1 00:15:44.511 --rc 
geninfo_unexecuted_blocks=1 00:15:44.511 00:15:44.511 ' 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:44.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:44.511 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:44.512 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2900389 00:15:44.512 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2900389' 00:15:44.512 Process pid: 2900389 00:15:44.512 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:44.512 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2900389 00:15:44.512 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:44.512 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 2900389 ']' 00:15:44.512 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.512 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:44.512 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.512 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:44.512 19:06:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:44.512 [2024-11-26 19:06:01.612016] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
00:15:44.512 [2024-11-26 19:06:01.612074] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.512 [2024-11-26 19:06:01.696703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:44.772 [2024-11-26 19:06:01.730743] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:44.772 [2024-11-26 19:06:01.730776] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:44.772 [2024-11-26 19:06:01.730782] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:44.772 [2024-11-26 19:06:01.730787] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:44.772 [2024-11-26 19:06:01.730791] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:44.772 [2024-11-26 19:06:01.732191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:44.772 [2024-11-26 19:06:01.732285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.772 [2024-11-26 19:06:01.732286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:45.344 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:45.344 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:15:45.344 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:46.286 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:46.286 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:46.286 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:46.286 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.286 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:46.286 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.286 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:46.286 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:46.286 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.286 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:46.286 malloc0 00:15:46.286 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.286 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:46.286 19:06:03 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.286 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:46.286 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.286 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:46.286 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.286 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:46.286 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.286 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:46.286 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.286 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:46.286 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.286 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:46.548 00:15:46.548 00:15:46.548 CUnit - A unit testing framework for C - Version 2.1-3 00:15:46.548 http://cunit.sourceforge.net/ 00:15:46.548 00:15:46.548 00:15:46.548 Suite: nvme_compliance 00:15:46.548 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-26 19:06:03.648087] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:46.548 [2024-11-26 19:06:03.649399] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:46.548 [2024-11-26 19:06:03.649412] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:46.548 [2024-11-26 19:06:03.649417] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:46.548 [2024-11-26 19:06:03.651106] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:46.548 passed 00:15:46.548 Test: admin_identify_ctrlr_verify_fused ...[2024-11-26 19:06:03.726578] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:46.548 [2024-11-26 19:06:03.729596] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:46.548 passed 00:15:46.809 Test: admin_identify_ns ...[2024-11-26 19:06:03.808172] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:46.809 [2024-11-26 19:06:03.870166] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:46.809 [2024-11-26 19:06:03.878168] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:46.809 [2024-11-26 19:06:03.899249] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:15:46.809 passed 00:15:46.809 Test: admin_get_features_mandatory_features ...[2024-11-26 19:06:03.972484] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:46.810 [2024-11-26 19:06:03.975502] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:46.810 passed 00:15:47.070 Test: admin_get_features_optional_features ...[2024-11-26 19:06:04.051963] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:47.070 [2024-11-26 19:06:04.054976] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:47.070 passed 00:15:47.070 Test: admin_set_features_number_of_queues ...[2024-11-26 19:06:04.131730] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:47.070 [2024-11-26 19:06:04.237250] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:47.070 passed 00:15:47.331 Test: admin_get_log_page_mandatory_logs ...[2024-11-26 19:06:04.311476] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:47.331 [2024-11-26 19:06:04.314499] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:47.331 passed 00:15:47.331 Test: admin_get_log_page_with_lpo ...[2024-11-26 19:06:04.390222] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:47.331 [2024-11-26 19:06:04.459166] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:47.331 [2024-11-26 19:06:04.472221] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:47.331 passed 00:15:47.591 Test: fabric_property_get ...[2024-11-26 19:06:04.545432] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:47.591 [2024-11-26 19:06:04.546631] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:47.591 [2024-11-26 19:06:04.548443] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:47.591 passed 00:15:47.591 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-26 19:06:04.623909] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:47.591 [2024-11-26 19:06:04.625110] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:47.591 [2024-11-26 19:06:04.626933] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:47.591 passed 00:15:47.591 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-26 19:06:04.702730] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:47.591 [2024-11-26 19:06:04.787163] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:47.851 [2024-11-26 19:06:04.803162] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:47.851 [2024-11-26 19:06:04.808235] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:47.851 passed 00:15:47.851 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-26 19:06:04.881448] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:47.851 [2024-11-26 19:06:04.882647] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:47.851 [2024-11-26 19:06:04.884475] vfio_user.c:2802:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:15:47.851 passed 00:15:47.851 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-26 19:06:04.959507] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:47.851 [2024-11-26 19:06:05.039171] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:48.112 [2024-11-26 19:06:05.063167] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:48.112 [2024-11-26 19:06:05.068227] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.112 passed 00:15:48.112 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-26 19:06:05.140413] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.112 [2024-11-26 19:06:05.141610] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:48.112 [2024-11-26 19:06:05.141628] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:48.112 [2024-11-26 19:06:05.143436] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.112 passed 00:15:48.112 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-26 19:06:05.219511] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.112 [2024-11-26 19:06:05.311164] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:48.112 [2024-11-26 19:06:05.319168] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:48.373 [2024-11-26 19:06:05.327171] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:48.373 [2024-11-26 19:06:05.335163] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:48.373 [2024-11-26 19:06:05.364235] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.373 passed 00:15:48.373 Test: admin_create_io_sq_verify_pc ...[2024-11-26 19:06:05.439251] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.373 [2024-11-26 19:06:05.457170] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:48.373 [2024-11-26 19:06:05.474400] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.373 passed 00:15:48.373 Test: admin_create_io_qp_max_qps ...[2024-11-26 19:06:05.549837] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:49.757 [2024-11-26 19:06:06.647166] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:50.018 [2024-11-26 19:06:07.029618] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.018 passed 00:15:50.018 Test: admin_create_io_sq_shared_cq ...[2024-11-26 19:06:07.103380] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:50.279 [2024-11-26 19:06:07.236164] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:50.279 [2024-11-26 19:06:07.273212] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.279 passed 00:15:50.279 00:15:50.279 Run Summary: Type Total Ran Passed Failed Inactive 00:15:50.279 suites 1 1 n/a 0 0 00:15:50.279 tests 18 18 18 0 0 00:15:50.279 asserts 
360 360 360 0 n/a 00:15:50.279 00:15:50.279 Elapsed time = 1.490 seconds 00:15:50.279 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2900389 00:15:50.279 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 2900389 ']' 00:15:50.279 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 2900389 00:15:50.279 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:15:50.279 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:50.279 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2900389 00:15:50.279 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:50.279 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:50.279 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2900389' 00:15:50.279 killing process with pid 2900389 00:15:50.279 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 2900389 00:15:50.279 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 2900389 00:15:50.540 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:50.540 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:50.540 00:15:50.540 real 0m6.180s 00:15:50.540 user 0m17.514s 00:15:50.540 sys 0m0.530s 00:15:50.540 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:50.540 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:50.540 ************************************ 00:15:50.540 END TEST nvmf_vfio_user_nvme_compliance 00:15:50.540 ************************************ 00:15:50.540 19:06:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:50.540 19:06:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:50.540 19:06:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:50.540 19:06:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:50.540 ************************************ 00:15:50.540 START TEST nvmf_vfio_user_fuzz 00:15:50.540 ************************************ 00:15:50.540 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:50.540 * Looking for test storage... 
00:15:50.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:50.540 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:50.540 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:15:50.540 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:50.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.801 --rc genhtml_branch_coverage=1 00:15:50.801 --rc genhtml_function_coverage=1 00:15:50.801 --rc genhtml_legend=1 00:15:50.801 --rc geninfo_all_blocks=1 00:15:50.801 --rc geninfo_unexecuted_blocks=1 00:15:50.801 00:15:50.801 ' 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:50.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.801 --rc genhtml_branch_coverage=1 00:15:50.801 --rc genhtml_function_coverage=1 00:15:50.801 --rc genhtml_legend=1 00:15:50.801 --rc geninfo_all_blocks=1 00:15:50.801 --rc geninfo_unexecuted_blocks=1 00:15:50.801 00:15:50.801 ' 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:50.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.801 --rc genhtml_branch_coverage=1 00:15:50.801 --rc genhtml_function_coverage=1 00:15:50.801 --rc genhtml_legend=1 00:15:50.801 --rc geninfo_all_blocks=1 00:15:50.801 --rc geninfo_unexecuted_blocks=1 00:15:50.801 00:15:50.801 ' 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:50.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.801 --rc genhtml_branch_coverage=1 00:15:50.801 --rc genhtml_function_coverage=1 00:15:50.801 --rc genhtml_legend=1 00:15:50.801 --rc geninfo_all_blocks=1 00:15:50.801 --rc geninfo_unexecuted_blocks=1 00:15:50.801 00:15:50.801 ' 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:50.801 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:50.802 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2901806 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2901806' 00:15:50.802 Process pid: 2901806 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2901806 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2901806 ']' 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
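The "[: : integer expression expected" complaint above is script noise rather than a test failure: nvmf/common.sh line 33 expands a variable that is unset in this run to the empty string and then compares it numerically with -eq. A minimal sketch of the usual fix, using a hypothetical stand-in name for the variable actually tested at line 33, is to default the expansion before the numeric test:

# SPDK_GUARD_FLAG is a hypothetical stand-in for the unset variable at common.sh line 33
if [ "${SPDK_GUARD_FLAG:-0}" -eq 1 ]; then    # ":-0" makes the empty/unset case a valid integer
    echo "guard enabled"
fi

With the default in place, an unset or empty variable evaluates as 0 and sourcing common.sh stays quiet.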
00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:50.802 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:51.743 19:06:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:51.743 19:06:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:15:51.743 19:06:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:52.684 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:52.684 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.684 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:52.684 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.684 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:52.684 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:52.684 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.684 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:52.684 malloc0 00:15:52.684 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.684 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:52.684 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.684 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:52.684 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.684 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:52.684 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.684 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:52.684 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.684 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:52.684 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.684 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:52.684 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.684 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
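The rpc_cmd calls traced above map one-to-one onto plain scripts/rpc.py invocations against the target's default RPC socket; as a reference sketch, the same vfio-user target bring-up issued by hand would be:

scripts/rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
scripts/rpc.py bdev_malloc_create 64 512 -b malloc0    # 64 MiB ramdisk bdev, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

Note that -a on nvmf_create_subsystem allows any host to connect, and the listener address for the VFIOUSER transport is a socket directory rather than an IP/port pair.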
00:15:52.684 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:24.887 Fuzzing completed. Shutting down the fuzz application 00:16:24.887 00:16:24.887 Dumping successful admin opcodes: 00:16:24.887 9, 10, 00:16:24.887 Dumping successful io opcodes: 00:16:24.887 0, 00:16:24.887 NS: 0x20000081ef00 I/O qp, Total commands completed: 1408920, total successful commands: 5536, random_seed: 204363840 00:16:24.887 NS: 0x20000081ef00 admin qp, Total commands completed: 350528, total successful commands: 94, random_seed: 1169168512 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2901806 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2901806 ']' 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 2901806 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2901806 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2901806' 00:16:24.887 killing process with pid 2901806 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 2901806 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 2901806 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:24.887 00:16:24.887 real 0m32.820s 00:16:24.887 user 0m37.856s 00:16:24.887 sys 0m24.174s 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:24.887 ************************************ 
00:16:24.887 END TEST nvmf_vfio_user_fuzz 00:16:24.887 ************************************ 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:24.887 ************************************ 00:16:24.887 START TEST nvmf_auth_target 00:16:24.887 ************************************ 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:24.887 * Looking for test storage... 00:16:24.887 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:24.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.887 --rc genhtml_branch_coverage=1 00:16:24.887 --rc genhtml_function_coverage=1 00:16:24.887 --rc genhtml_legend=1 00:16:24.887 --rc geninfo_all_blocks=1 00:16:24.887 --rc geninfo_unexecuted_blocks=1 00:16:24.887 00:16:24.887 ' 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:24.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.887 --rc genhtml_branch_coverage=1 00:16:24.887 --rc genhtml_function_coverage=1 00:16:24.887 --rc genhtml_legend=1 00:16:24.887 --rc geninfo_all_blocks=1 00:16:24.887 --rc geninfo_unexecuted_blocks=1 00:16:24.887 00:16:24.887 ' 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:24.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.887 --rc genhtml_branch_coverage=1 00:16:24.887 --rc genhtml_function_coverage=1 00:16:24.887 --rc genhtml_legend=1 00:16:24.887 --rc geninfo_all_blocks=1 00:16:24.887 --rc geninfo_unexecuted_blocks=1 00:16:24.887 00:16:24.887 ' 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:24.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.887 --rc genhtml_branch_coverage=1 00:16:24.887 --rc genhtml_function_coverage=1 00:16:24.887 --rc genhtml_legend=1 00:16:24.887 --rc geninfo_all_blocks=1 00:16:24.887 --rc geninfo_unexecuted_blocks=1 00:16:24.887 00:16:24.887 ' 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:24.887 19:06:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:24.887 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:[same three /opt bin dirs re-prepended, duplicate PATH runs condensed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[duplicate PATH dump condensed] 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[duplicate PATH dump condensed] 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:24.888 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- #
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:24.888 19:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:31.476 
19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:31.476 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:31.476 19:06:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:31.476 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:31.476 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:31.476 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:31.476 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:31.477 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:31.477 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:31.477 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:31.477 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:31.477 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:31.477 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:31.477 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:31.477 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:31.477 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:31.477 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:31.477 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:31.477 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:31.477 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:31.477 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:31.477 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:31.477 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:31.477 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:31.477 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:31.477 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:31.477 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:31.477 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:31.477 19:06:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:31.477 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:31.477 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.505 ms 00:16:31.477 00:16:31.477 --- 10.0.0.2 ping statistics --- 00:16:31.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.477 rtt min/avg/max/mdev = 0.505/0.505/0.505/0.000 ms 00:16:31.477 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:31.477 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:31.477 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:16:31.477 00:16:31.477 --- 10.0.0.1 ping statistics --- 00:16:31.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.477 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:16:31.477 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:31.477 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:16:31.477 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:31.477 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:31.477 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:31.477 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:31.477 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:31.477 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:31.477 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:31.477 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:31.477 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:31.477 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:31.477 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.477 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2912257 00:16:31.477 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2912257 00:16:31.477 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:31.477 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2912257 ']' 00:16:31.477 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.477 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:31.477 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
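The connectivity checks above ride on the two physical E810 ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (the target side), while cvl_0_1 stays in the root namespace as 10.0.0.1 (the initiator side). On a box without spare NICs, a rough veth-based equivalent of the same topology would look like the sketch below; the interface and namespace names are illustrative, not the ones the harness uses.

ip netns add tgt_ns                               # stand-in for cvl_0_0_ns_spdk
ip link add veth_host type veth peer name veth_tgt
ip link set veth_tgt netns tgt_ns                 # one end plays the target NIC
ip addr add 10.0.0.1/24 dev veth_host
ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
ip link set veth_host up
ip netns exec tgt_ns ip link set veth_tgt up
ip netns exec tgt_ns ip link set lo up
ping -c 1 10.0.0.2                                # root ns -> target ns, mirroring the check above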
00:16:31.477 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:31.477 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.050 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:32.050 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:32.050 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:32.050 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:32.050 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2912437 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d7d9477cf2aad4c9c7346a31e2fd67a09d229940eb440277 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.MUn 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d7d9477cf2aad4c9c7346a31e2fd67a09d229940eb440277 0 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d7d9477cf2aad4c9c7346a31e2fd67a09d229940eb440277 0 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d7d9477cf2aad4c9c7346a31e2fd67a09d229940eb440277 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
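The inline `python -` step at nvmf/common.sh@733 is elided by xtrace, but given the DHHC-1 prefix and digest index it is handed, it plausibly emits the standard NVMe DH-HMAC-CHAP secret representation: base64 of the raw key bytes followed by their CRC-32, wrapped as DHHC-1:<digest>:<base64>:. A reconstruction under that assumption (a sketch, not SPDK's exact code; the hardcoded 00 digest field means "no hash transform", matching gen_dhchap_key null):

key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex chars, as in gen_dhchap_key null 48
python3 - "$key" <<'PY'
import base64, struct, sys, zlib
raw = bytes.fromhex(sys.argv[1])
crc = struct.pack("<I", zlib.crc32(raw))   # little-endian CRC-32 of the key bytes (assumed encoding)
print("DHHC-1:00:%s:" % base64.b64encode(raw + crc).decode())
PY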
00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.MUn 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.MUn 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.MUn 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ff5df992d50508ba72ee185d1d2c29b629e4ff4080760b35d60a9f0edd437626 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.aOq 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ff5df992d50508ba72ee185d1d2c29b629e4ff4080760b35d60a9f0edd437626 3 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ff5df992d50508ba72ee185d1d2c29b629e4ff4080760b35d60a9f0edd437626 3 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ff5df992d50508ba72ee185d1d2c29b629e4ff4080760b35d60a9f0edd437626 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.aOq 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.aOq 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.aOq 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a8620278ea5c769cf9fb3a72bc64558c 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.tBr 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a8620278ea5c769cf9fb3a72bc64558c 1 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a8620278ea5c769cf9fb3a72bc64558c 1 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a8620278ea5c769cf9fb3a72bc64558c 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:32.051 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.tBr 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.tBr 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.tBr 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=327728b4bc45edc3703f685c46a1709416837f35ad35ab70 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.P6b 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 327728b4bc45edc3703f685c46a1709416837f35ad35ab70 2 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 327728b4bc45edc3703f685c46a1709416837f35ad35ab70 2 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:32.312 19:06:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=327728b4bc45edc3703f685c46a1709416837f35ad35ab70 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.P6b 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.P6b 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.P6b 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9a2b22b6d7952a92d8e4495c202642fcc8b2bb13b2427984 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.0Vm 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9a2b22b6d7952a92d8e4495c202642fcc8b2bb13b2427984 2 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9a2b22b6d7952a92d8e4495c202642fcc8b2bb13b2427984 2 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9a2b22b6d7952a92d8e4495c202642fcc8b2bb13b2427984 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.0Vm 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.0Vm 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.0Vm 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2fe2a7009dfcbe773460042c79eceb7d 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.zYl 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2fe2a7009dfcbe773460042c79eceb7d 1 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2fe2a7009dfcbe773460042c79eceb7d 1 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2fe2a7009dfcbe773460042c79eceb7d 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.zYl 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.zYl 00:16:32.312 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.zYl 00:16:32.573 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:32.573 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:32.573 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:32.573 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:32.573 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:32.573 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:32.573 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:32.573 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=aba47abe0e1cdcb3f450c344ae8fa2a57706460aa0e3a4a88c29e7690bd0086d 00:16:32.573 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:32.573 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.HAm 00:16:32.573 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key aba47abe0e1cdcb3f450c344ae8fa2a57706460aa0e3a4a88c29e7690bd0086d 3 00:16:32.573 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 aba47abe0e1cdcb3f450c344ae8fa2a57706460aa0e3a4a88c29e7690bd0086d 3 00:16:32.573 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:32.573 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:32.573 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=aba47abe0e1cdcb3f450c344ae8fa2a57706460aa0e3a4a88c29e7690bd0086d 00:16:32.573 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:32.573 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:32.573 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.HAm 00:16:32.573 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.HAm 00:16:32.573 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.HAm 00:16:32.573 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:32.573 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2912257 00:16:32.573 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2912257 ']' 00:16:32.573 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.573 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:32.573 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.573 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:32.573 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.573 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:32.573 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:32.573 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2912437 /var/tmp/host.sock 00:16:32.573 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2912437 ']' 00:16:32.573 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:16:32.573 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:32.573 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:32.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
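[Note] At this point all four key pairs exist on disk and both RPC servers are listening: the target on /var/tmp/spdk.sock and the initiator-side host daemon on /var/tmp/host.sock. Everything that follows registers each key file under the same name on both sides, then iterates digest x dhgroup x keyid, attaching and detaching an authenticated controller per combination. A condensed sketch of one such iteration, built only from commands that appear verbatim in the trace below (the $rpc and $host shorthands are ours):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    host=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

    # Register the key file under a matching name on target and initiator
    # (ckey0, key1, ... are added the same way for each generated file).
    $rpc -s /var/tmp/spdk.sock keyring_file_add_key key0 /tmp/spdk.key-null.MUn
    $rpc -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.MUn

    # Pin the initiator to one digest/dhgroup pair, allow the host NQN on the
    # subsystem with that key pair, then attach and verify the session state.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
    $rpc -s /var/tmp/spdk.sock nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$host" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$host" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    $rpc -s /var/tmp/spdk.sock nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.state'   # expect "completed"

The --dhchap-ctrlr-key arguments make the handshake bidirectional; key3 was registered without a controller key (ckeys[3] is empty above), so that combination exercises unidirectional authentication only.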
00:16:32.573 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:32.573 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.833 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:32.833 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:32.834 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:32.834 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.834 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:32.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.MUn 00:16:32.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.MUn 00:16:32.834 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.MUn 00:16:33.094 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.aOq ]] 00:16:33.094 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.aOq 00:16:33.094 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.094 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.094 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.094 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.aOq 00:16:33.094 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.aOq 00:16:33.354 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:33.354 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.tBr 00:16:33.354 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.354 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.354 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.354 19:06:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.tBr 00:16:33.354 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.tBr 00:16:33.615 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.P6b ]] 00:16:33.615 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.P6b 00:16:33.615 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.615 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.615 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.615 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.P6b 00:16:33.615 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.P6b 00:16:33.615 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:33.615 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.0Vm 00:16:33.615 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.615 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.615 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.615 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.0Vm 00:16:33.615 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.0Vm 00:16:33.875 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.zYl ]] 00:16:33.875 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.zYl 00:16:33.875 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.875 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.875 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.875 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.zYl 00:16:33.876 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.zYl 00:16:34.136 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:34.136 19:06:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.HAm 00:16:34.136 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.136 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.136 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.136 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.HAm 00:16:34.136 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.HAm 00:16:34.397 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:34.397 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:34.397 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:34.397 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.397 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:34.397 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:34.397 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:34.397 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.397 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:34.397 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:34.397 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:34.397 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.397 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.397 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.397 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.397 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.397 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.397 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.397 
19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.658 00:16:34.658 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.658 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.658 19:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.918 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.918 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.918 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.918 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.918 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.918 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.918 { 00:16:34.918 "cntlid": 1, 00:16:34.918 "qid": 0, 00:16:34.918 "state": "enabled", 00:16:34.918 "thread": "nvmf_tgt_poll_group_000", 00:16:34.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:34.918 "listen_address": { 00:16:34.918 "trtype": "TCP", 00:16:34.918 "adrfam": "IPv4", 00:16:34.918 "traddr": "10.0.0.2", 00:16:34.918 "trsvcid": "4420" 00:16:34.918 }, 00:16:34.918 "peer_address": { 00:16:34.918 "trtype": "TCP", 00:16:34.918 "adrfam": "IPv4", 00:16:34.918 "traddr": "10.0.0.1", 00:16:34.918 "trsvcid": "43344" 00:16:34.918 }, 00:16:34.918 "auth": { 00:16:34.918 "state": "completed", 00:16:34.918 "digest": "sha256", 00:16:34.918 "dhgroup": "null" 00:16:34.918 } 00:16:34.918 } 00:16:34.918 ]' 00:16:34.918 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.918 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:34.918 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.918 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:35.179 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.179 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.179 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.179 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.179 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZDdkOTQ3N2NmMmFhZDRjOWM3MzQ2YTMxZTJmZDY3YTA5ZDIyOTk0MGViNDQwMjc3A3oXQQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY1ZGY5OTJkNTA1MDhiYTcyZWUxODVkMWQyYzI5YjYyOWU0ZmY0MDgwNzYwYjM1ZDYwYTlmMGVkZDQzNzYyNh9xvq0=: 00:16:35.179 19:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZDdkOTQ3N2NmMmFhZDRjOWM3MzQ2YTMxZTJmZDY3YTA5ZDIyOTk0MGViNDQwMjc3A3oXQQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY1ZGY5OTJkNTA1MDhiYTcyZWUxODVkMWQyYzI5YjYyOWU0ZmY0MDgwNzYwYjM1ZDYwYTlmMGVkZDQzNzYyNh9xvq0=: 00:16:35.829 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.829 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:35.829 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.829 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.090 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.090 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.090 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:36.090 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:36.090 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:36.090 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.090 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:36.090 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:36.090 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:36.090 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.090 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.090 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.090 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.090 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.090 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.090 19:06:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.090 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.350 00:16:36.350 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.350 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.350 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.611 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.611 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.611 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.611 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.611 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.611 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.611 { 00:16:36.611 "cntlid": 3, 00:16:36.611 "qid": 0, 00:16:36.611 "state": "enabled", 00:16:36.611 "thread": "nvmf_tgt_poll_group_000", 00:16:36.611 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:36.611 "listen_address": { 00:16:36.611 "trtype": "TCP", 00:16:36.611 "adrfam": "IPv4", 00:16:36.611 "traddr": "10.0.0.2", 00:16:36.611 "trsvcid": "4420" 00:16:36.611 }, 00:16:36.611 "peer_address": { 00:16:36.611 "trtype": "TCP", 00:16:36.611 "adrfam": "IPv4", 00:16:36.611 "traddr": "10.0.0.1", 00:16:36.611 "trsvcid": "43378" 00:16:36.611 }, 00:16:36.611 "auth": { 00:16:36.611 "state": "completed", 00:16:36.611 "digest": "sha256", 00:16:36.611 "dhgroup": "null" 00:16:36.611 } 00:16:36.611 } 00:16:36.611 ]' 00:16:36.611 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.611 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:36.611 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.611 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:36.611 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.611 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.611 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.611 19:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.872 19:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: --dhchap-ctrl-secret DHHC-1:02:MzI3NzI4YjRiYzQ1ZWRjMzcwM2Y2ODVjNDZhMTcwOTQxNjgzN2YzNWFkMzVhYjcwxm6ZNA==: 00:16:36.872 19:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: --dhchap-ctrl-secret DHHC-1:02:MzI3NzI4YjRiYzQ1ZWRjMzcwM2Y2ODVjNDZhMTcwOTQxNjgzN2YzNWFkMzVhYjcwxm6ZNA==: 00:16:37.814 19:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.814 19:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:37.814 19:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.814 19:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.814 19:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.814 19:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.814 19:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:37.814 19:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:37.814 19:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:37.814 19:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.814 19:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:37.814 19:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:37.814 19:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:37.814 19:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.814 19:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.814 19:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.814 19:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.814 19:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.814 19:06:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.814 19:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.814 19:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.075 00:16:38.075 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.075 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.075 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.335 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.335 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.335 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.335 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.335 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.335 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.335 { 00:16:38.335 "cntlid": 5, 00:16:38.335 "qid": 0, 00:16:38.335 "state": "enabled", 00:16:38.336 "thread": "nvmf_tgt_poll_group_000", 00:16:38.336 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:38.336 "listen_address": { 00:16:38.336 "trtype": "TCP", 00:16:38.336 "adrfam": "IPv4", 00:16:38.336 "traddr": "10.0.0.2", 00:16:38.336 "trsvcid": "4420" 00:16:38.336 }, 00:16:38.336 "peer_address": { 00:16:38.336 "trtype": "TCP", 00:16:38.336 "adrfam": "IPv4", 00:16:38.336 "traddr": "10.0.0.1", 00:16:38.336 "trsvcid": "54380" 00:16:38.336 }, 00:16:38.336 "auth": { 00:16:38.336 "state": "completed", 00:16:38.336 "digest": "sha256", 00:16:38.336 "dhgroup": "null" 00:16:38.336 } 00:16:38.336 } 00:16:38.336 ]' 00:16:38.336 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.336 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.336 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.336 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:38.336 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.336 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.336 19:06:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.336 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.596 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: --dhchap-ctrl-secret DHHC-1:01:MmZlMmE3MDA5ZGZjYmU3NzM0NjAwNDJjNzllY2ViN2SNmBcN: 00:16:38.596 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: --dhchap-ctrl-secret DHHC-1:01:MmZlMmE3MDA5ZGZjYmU3NzM0NjAwNDJjNzllY2ViN2SNmBcN: 00:16:39.167 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.167 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:39.167 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.167 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.167 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.167 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.167 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:39.167 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:39.427 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:39.427 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.427 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:39.427 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:39.427 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:39.427 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.427 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:39.427 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.427 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:39.427 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.427 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:39.427 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:39.427 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:39.687 00:16:39.687 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.687 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.687 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.948 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.948 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.948 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.948 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.948 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.948 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.948 { 00:16:39.948 "cntlid": 7, 00:16:39.948 "qid": 0, 00:16:39.948 "state": "enabled", 00:16:39.948 "thread": "nvmf_tgt_poll_group_000", 00:16:39.948 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:39.948 "listen_address": { 00:16:39.948 "trtype": "TCP", 00:16:39.948 "adrfam": "IPv4", 00:16:39.948 "traddr": "10.0.0.2", 00:16:39.948 "trsvcid": "4420" 00:16:39.948 }, 00:16:39.948 "peer_address": { 00:16:39.948 "trtype": "TCP", 00:16:39.948 "adrfam": "IPv4", 00:16:39.948 "traddr": "10.0.0.1", 00:16:39.948 "trsvcid": "54406" 00:16:39.948 }, 00:16:39.948 "auth": { 00:16:39.948 "state": "completed", 00:16:39.948 "digest": "sha256", 00:16:39.948 "dhgroup": "null" 00:16:39.948 } 00:16:39.948 } 00:16:39.948 ]' 00:16:39.948 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.948 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.948 19:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.948 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:39.948 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.948 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.948 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.948 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.208 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=: 00:16:40.208 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=: 00:16:40.789 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.789 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:40.789 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.789 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.789 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.789 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:40.789 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.789 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:40.789 19:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:41.049 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:41.049 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.049 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:41.049 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:41.049 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:41.049 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.049 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.049 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.049 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.049 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.049 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.049 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.049 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.309 00:16:41.309 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.309 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.309 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.309 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.309 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.309 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.309 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.309 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.309 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.309 { 00:16:41.309 "cntlid": 9, 00:16:41.309 "qid": 0, 00:16:41.309 "state": "enabled", 00:16:41.309 "thread": "nvmf_tgt_poll_group_000", 00:16:41.309 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:41.309 "listen_address": { 00:16:41.309 "trtype": "TCP", 00:16:41.309 "adrfam": "IPv4", 00:16:41.309 "traddr": "10.0.0.2", 00:16:41.309 "trsvcid": "4420" 00:16:41.309 }, 00:16:41.309 "peer_address": { 00:16:41.309 "trtype": "TCP", 00:16:41.309 "adrfam": "IPv4", 00:16:41.309 "traddr": "10.0.0.1", 00:16:41.309 "trsvcid": "54438" 00:16:41.309 }, 00:16:41.309 "auth": { 00:16:41.309 "state": "completed", 00:16:41.309 "digest": "sha256", 00:16:41.309 "dhgroup": "ffdhe2048" 00:16:41.309 } 00:16:41.309 } 00:16:41.309 ]' 00:16:41.309 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.570 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.570 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.570 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:16:41.570 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.570 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.570 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.570 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.831 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDdkOTQ3N2NmMmFhZDRjOWM3MzQ2YTMxZTJmZDY3YTA5ZDIyOTk0MGViNDQwMjc3A3oXQQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY1ZGY5OTJkNTA1MDhiYTcyZWUxODVkMWQyYzI5YjYyOWU0ZmY0MDgwNzYwYjM1ZDYwYTlmMGVkZDQzNzYyNh9xvq0=: 00:16:41.831 19:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZDdkOTQ3N2NmMmFhZDRjOWM3MzQ2YTMxZTJmZDY3YTA5ZDIyOTk0MGViNDQwMjc3A3oXQQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY1ZGY5OTJkNTA1MDhiYTcyZWUxODVkMWQyYzI5YjYyOWU0ZmY0MDgwNzYwYjM1ZDYwYTlmMGVkZDQzNzYyNh9xvq0=: 00:16:42.399 19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.399 19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:42.399 19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.399 19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.399 19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.399 19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.399 19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:42.399 19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:42.659 19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:42.659 19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.659 19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:42.659 19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:42.659 19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:42.659 19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.659 19:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.659 19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.659 19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.659 19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.659 19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.659 19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.659 19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.919 00:16:42.919 19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.919 19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.919 19:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.919 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.919 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.919 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.919 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.919 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.919 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.919 { 00:16:42.919 "cntlid": 11, 00:16:42.919 "qid": 0, 00:16:42.919 "state": "enabled", 00:16:42.919 "thread": "nvmf_tgt_poll_group_000", 00:16:42.919 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:42.919 "listen_address": { 00:16:42.919 "trtype": "TCP", 00:16:42.919 "adrfam": "IPv4", 00:16:42.919 "traddr": "10.0.0.2", 00:16:42.919 "trsvcid": "4420" 00:16:42.919 }, 00:16:42.919 "peer_address": { 00:16:42.919 "trtype": "TCP", 00:16:42.919 "adrfam": "IPv4", 00:16:42.919 "traddr": "10.0.0.1", 00:16:42.919 "trsvcid": "54464" 00:16:42.919 }, 00:16:42.919 "auth": { 00:16:42.919 "state": "completed", 00:16:42.919 "digest": "sha256", 00:16:42.919 "dhgroup": "ffdhe2048" 00:16:42.919 } 00:16:42.919 } 00:16:42.919 ]' 00:16:42.919 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.179 19:07:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:43.179 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.179 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:43.179 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.179 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.179 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.179 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.439 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: --dhchap-ctrl-secret DHHC-1:02:MzI3NzI4YjRiYzQ1ZWRjMzcwM2Y2ODVjNDZhMTcwOTQxNjgzN2YzNWFkMzVhYjcwxm6ZNA==: 00:16:43.439 19:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: --dhchap-ctrl-secret DHHC-1:02:MzI3NzI4YjRiYzQ1ZWRjMzcwM2Y2ODVjNDZhMTcwOTQxNjgzN2YzNWFkMzVhYjcwxm6ZNA==: 00:16:44.010 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.010 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:44.010 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.010 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.010 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.010 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.011 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:44.011 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:44.271 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:44.271 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.271 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:44.271 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:44.271 19:07:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:44.271 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.271 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.271 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.271 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.271 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.271 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.271 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.272 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.532 00:16:44.532 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.532 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.532 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.792 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.792 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.792 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.792 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.793 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.793 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.793 { 00:16:44.793 "cntlid": 13, 00:16:44.793 "qid": 0, 00:16:44.793 "state": "enabled", 00:16:44.793 "thread": "nvmf_tgt_poll_group_000", 00:16:44.793 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:44.793 "listen_address": { 00:16:44.793 "trtype": "TCP", 00:16:44.793 "adrfam": "IPv4", 00:16:44.793 "traddr": "10.0.0.2", 00:16:44.793 "trsvcid": "4420" 00:16:44.793 }, 00:16:44.793 "peer_address": { 00:16:44.793 "trtype": "TCP", 00:16:44.793 "adrfam": "IPv4", 00:16:44.793 "traddr": "10.0.0.1", 00:16:44.793 "trsvcid": "54498" 00:16:44.793 }, 00:16:44.793 "auth": { 00:16:44.793 "state": "completed", 00:16:44.793 "digest": 
"sha256", 00:16:44.793 "dhgroup": "ffdhe2048" 00:16:44.793 } 00:16:44.793 } 00:16:44.793 ]' 00:16:44.793 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.793 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.793 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.793 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:44.793 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.793 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.793 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.793 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.054 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: --dhchap-ctrl-secret DHHC-1:01:MmZlMmE3MDA5ZGZjYmU3NzM0NjAwNDJjNzllY2ViN2SNmBcN: 00:16:45.054 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: --dhchap-ctrl-secret DHHC-1:01:MmZlMmE3MDA5ZGZjYmU3NzM0NjAwNDJjNzllY2ViN2SNmBcN: 00:16:45.625 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.625 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:45.625 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.625 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.625 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.625 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.625 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:45.625 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:45.887 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:45.887 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.887 19:07:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:45.887 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:45.887 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:45.887 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.887 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:45.887 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.887 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.887 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.887 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:45.887 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.887 19:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.148 00:16:46.148 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.148 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.148 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.409 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.409 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.409 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.409 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.409 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.409 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.409 { 00:16:46.409 "cntlid": 15, 00:16:46.409 "qid": 0, 00:16:46.409 "state": "enabled", 00:16:46.410 "thread": "nvmf_tgt_poll_group_000", 00:16:46.410 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:46.410 "listen_address": { 00:16:46.410 "trtype": "TCP", 00:16:46.410 "adrfam": "IPv4", 00:16:46.410 "traddr": "10.0.0.2", 00:16:46.410 "trsvcid": "4420" 00:16:46.410 }, 00:16:46.410 "peer_address": { 00:16:46.410 "trtype": "TCP", 00:16:46.410 "adrfam": "IPv4", 00:16:46.410 "traddr": "10.0.0.1", 00:16:46.410 
"trsvcid": "54506" 00:16:46.410 }, 00:16:46.410 "auth": { 00:16:46.410 "state": "completed", 00:16:46.410 "digest": "sha256", 00:16:46.410 "dhgroup": "ffdhe2048" 00:16:46.410 } 00:16:46.410 } 00:16:46.410 ]' 00:16:46.410 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.410 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.410 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.410 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:46.410 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.410 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.410 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.410 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.670 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=: 00:16:46.670 19:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=: 00:16:47.241 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.241 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:47.241 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.241 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.241 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.241 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:47.241 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.241 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:47.241 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:47.502 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:47.502 19:07:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.502 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:47.502 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:47.502 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:47.502 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.502 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.502 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.502 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.502 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.502 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.502 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.502 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.762 00:16:47.763 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.763 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.763 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.023 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.023 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.023 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.023 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.023 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.024 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.024 { 00:16:48.024 "cntlid": 17, 00:16:48.024 "qid": 0, 00:16:48.024 "state": "enabled", 00:16:48.024 "thread": "nvmf_tgt_poll_group_000", 00:16:48.024 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:48.024 "listen_address": { 00:16:48.024 "trtype": "TCP", 00:16:48.024 "adrfam": "IPv4", 
00:16:48.024 "traddr": "10.0.0.2", 00:16:48.024 "trsvcid": "4420" 00:16:48.024 }, 00:16:48.024 "peer_address": { 00:16:48.024 "trtype": "TCP", 00:16:48.024 "adrfam": "IPv4", 00:16:48.024 "traddr": "10.0.0.1", 00:16:48.024 "trsvcid": "46114" 00:16:48.024 }, 00:16:48.024 "auth": { 00:16:48.024 "state": "completed", 00:16:48.024 "digest": "sha256", 00:16:48.024 "dhgroup": "ffdhe3072" 00:16:48.024 } 00:16:48.024 } 00:16:48.024 ]' 00:16:48.024 19:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.024 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:48.024 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.024 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:48.024 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.024 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.024 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.024 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.284 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDdkOTQ3N2NmMmFhZDRjOWM3MzQ2YTMxZTJmZDY3YTA5ZDIyOTk0MGViNDQwMjc3A3oXQQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY1ZGY5OTJkNTA1MDhiYTcyZWUxODVkMWQyYzI5YjYyOWU0ZmY0MDgwNzYwYjM1ZDYwYTlmMGVkZDQzNzYyNh9xvq0=: 00:16:48.284 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZDdkOTQ3N2NmMmFhZDRjOWM3MzQ2YTMxZTJmZDY3YTA5ZDIyOTk0MGViNDQwMjc3A3oXQQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY1ZGY5OTJkNTA1MDhiYTcyZWUxODVkMWQyYzI5YjYyOWU0ZmY0MDgwNzYwYjM1ZDYwYTlmMGVkZDQzNzYyNh9xvq0=: 00:16:48.854 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.854 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:48.854 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.854 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.854 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.854 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.854 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:48.854 19:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:49.114 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:49.114 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.114 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:49.114 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:49.114 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:49.114 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.114 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.114 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.114 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.114 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.114 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.114 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.115 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.374 00:16:49.374 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.374 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.374 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.374 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.374 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.374 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.374 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.635 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.635 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.635 { 
00:16:49.635 "cntlid": 19, 00:16:49.635 "qid": 0, 00:16:49.635 "state": "enabled", 00:16:49.635 "thread": "nvmf_tgt_poll_group_000", 00:16:49.635 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:49.635 "listen_address": { 00:16:49.635 "trtype": "TCP", 00:16:49.635 "adrfam": "IPv4", 00:16:49.635 "traddr": "10.0.0.2", 00:16:49.635 "trsvcid": "4420" 00:16:49.635 }, 00:16:49.635 "peer_address": { 00:16:49.635 "trtype": "TCP", 00:16:49.635 "adrfam": "IPv4", 00:16:49.635 "traddr": "10.0.0.1", 00:16:49.635 "trsvcid": "46142" 00:16:49.635 }, 00:16:49.635 "auth": { 00:16:49.635 "state": "completed", 00:16:49.635 "digest": "sha256", 00:16:49.635 "dhgroup": "ffdhe3072" 00:16:49.635 } 00:16:49.635 } 00:16:49.635 ]' 00:16:49.635 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.635 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.635 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.635 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:49.635 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.635 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.635 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.635 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.895 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: --dhchap-ctrl-secret DHHC-1:02:MzI3NzI4YjRiYzQ1ZWRjMzcwM2Y2ODVjNDZhMTcwOTQxNjgzN2YzNWFkMzVhYjcwxm6ZNA==: 00:16:49.895 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: --dhchap-ctrl-secret DHHC-1:02:MzI3NzI4YjRiYzQ1ZWRjMzcwM2Y2ODVjNDZhMTcwOTQxNjgzN2YzNWFkMzVhYjcwxm6ZNA==: 00:16:50.472 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.472 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:50.472 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.472 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.472 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.472 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.472 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:50.472 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:50.903 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:50.903 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.903 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:50.903 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:50.903 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:50.903 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.903 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.903 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.903 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.903 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.903 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.903 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.903 19:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.903 00:16:50.903 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.903 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.903 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.168 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.168 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.168 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.168 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.168 19:07:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.168 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.168 { 00:16:51.168 "cntlid": 21, 00:16:51.168 "qid": 0, 00:16:51.168 "state": "enabled", 00:16:51.168 "thread": "nvmf_tgt_poll_group_000", 00:16:51.168 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:51.168 "listen_address": { 00:16:51.168 "trtype": "TCP", 00:16:51.168 "adrfam": "IPv4", 00:16:51.168 "traddr": "10.0.0.2", 00:16:51.168 "trsvcid": "4420" 00:16:51.168 }, 00:16:51.168 "peer_address": { 00:16:51.168 "trtype": "TCP", 00:16:51.168 "adrfam": "IPv4", 00:16:51.168 "traddr": "10.0.0.1", 00:16:51.168 "trsvcid": "46172" 00:16:51.168 }, 00:16:51.168 "auth": { 00:16:51.168 "state": "completed", 00:16:51.168 "digest": "sha256", 00:16:51.168 "dhgroup": "ffdhe3072" 00:16:51.168 } 00:16:51.168 } 00:16:51.168 ]' 00:16:51.168 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.168 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.168 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.168 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:51.168 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.430 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.430 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.430 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.430 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: --dhchap-ctrl-secret DHHC-1:01:MmZlMmE3MDA5ZGZjYmU3NzM0NjAwNDJjNzllY2ViN2SNmBcN: 00:16:51.430 19:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: --dhchap-ctrl-secret DHHC-1:01:MmZlMmE3MDA5ZGZjYmU3NzM0NjAwNDJjNzllY2ViN2SNmBcN: 00:16:52.371 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.371 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:52.371 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.371 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.371 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:52.371 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.371 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:52.371 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:52.371 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:52.371 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.371 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:52.371 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:52.371 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:52.371 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.371 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:52.372 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.372 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.372 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.372 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:52.372 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:52.372 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:52.632 00:16:52.632 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.632 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.632 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.892 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.892 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.892 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.892 19:07:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.892 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.892 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.892 { 00:16:52.892 "cntlid": 23, 00:16:52.892 "qid": 0, 00:16:52.892 "state": "enabled", 00:16:52.892 "thread": "nvmf_tgt_poll_group_000", 00:16:52.892 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:52.892 "listen_address": { 00:16:52.892 "trtype": "TCP", 00:16:52.892 "adrfam": "IPv4", 00:16:52.892 "traddr": "10.0.0.2", 00:16:52.892 "trsvcid": "4420" 00:16:52.892 }, 00:16:52.892 "peer_address": { 00:16:52.892 "trtype": "TCP", 00:16:52.892 "adrfam": "IPv4", 00:16:52.892 "traddr": "10.0.0.1", 00:16:52.892 "trsvcid": "46200" 00:16:52.892 }, 00:16:52.892 "auth": { 00:16:52.892 "state": "completed", 00:16:52.892 "digest": "sha256", 00:16:52.892 "dhgroup": "ffdhe3072" 00:16:52.892 } 00:16:52.892 } 00:16:52.892 ]' 00:16:52.892 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.892 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:52.892 19:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.892 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:52.892 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.892 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.892 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.892 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.153 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=: 00:16:53.153 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=: 00:16:53.724 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.724 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:53.724 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.724 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.724 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:53.724 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:53.724 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.724 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:53.985 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:53.985 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:53.985 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.985 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:53.985 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:53.985 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:53.985 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.985 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.985 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.985 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.985 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.985 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.985 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.985 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.245 00:16:54.245 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.245 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.245 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.507 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.507 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.507 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.507 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.507 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.507 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.507 { 00:16:54.507 "cntlid": 25, 00:16:54.507 "qid": 0, 00:16:54.507 "state": "enabled", 00:16:54.507 "thread": "nvmf_tgt_poll_group_000", 00:16:54.507 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:54.507 "listen_address": { 00:16:54.507 "trtype": "TCP", 00:16:54.507 "adrfam": "IPv4", 00:16:54.507 "traddr": "10.0.0.2", 00:16:54.507 "trsvcid": "4420" 00:16:54.507 }, 00:16:54.507 "peer_address": { 00:16:54.507 "trtype": "TCP", 00:16:54.507 "adrfam": "IPv4", 00:16:54.507 "traddr": "10.0.0.1", 00:16:54.507 "trsvcid": "46234" 00:16:54.507 }, 00:16:54.507 "auth": { 00:16:54.507 "state": "completed", 00:16:54.507 "digest": "sha256", 00:16:54.507 "dhgroup": "ffdhe4096" 00:16:54.507 } 00:16:54.507 } 00:16:54.507 ]' 00:16:54.507 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.507 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:54.507 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.507 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:54.507 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.768 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.768 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.768 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.768 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDdkOTQ3N2NmMmFhZDRjOWM3MzQ2YTMxZTJmZDY3YTA5ZDIyOTk0MGViNDQwMjc3A3oXQQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY1ZGY5OTJkNTA1MDhiYTcyZWUxODVkMWQyYzI5YjYyOWU0ZmY0MDgwNzYwYjM1ZDYwYTlmMGVkZDQzNzYyNh9xvq0=: 00:16:54.768 19:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZDdkOTQ3N2NmMmFhZDRjOWM3MzQ2YTMxZTJmZDY3YTA5ZDIyOTk0MGViNDQwMjc3A3oXQQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY1ZGY5OTJkNTA1MDhiYTcyZWUxODVkMWQyYzI5YjYyOWU0ZmY0MDgwNzYwYjM1ZDYwYTlmMGVkZDQzNzYyNh9xvq0=: 00:16:55.709 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.710 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:55.710 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.710 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.710 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.710 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.710 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:55.710 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:55.710 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:55.710 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.710 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:55.710 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:55.710 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:55.710 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.710 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.710 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.710 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.710 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.710 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.710 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.710 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.971 00:16:55.971 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.971 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.971 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.232 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.232 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.232 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.232 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.232 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.232 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.232 { 00:16:56.232 "cntlid": 27, 00:16:56.232 "qid": 0, 00:16:56.232 "state": "enabled", 00:16:56.232 "thread": "nvmf_tgt_poll_group_000", 00:16:56.232 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:56.232 "listen_address": { 00:16:56.232 "trtype": "TCP", 00:16:56.232 "adrfam": "IPv4", 00:16:56.232 "traddr": "10.0.0.2", 00:16:56.232 "trsvcid": "4420" 00:16:56.232 }, 00:16:56.232 "peer_address": { 00:16:56.232 "trtype": "TCP", 00:16:56.232 "adrfam": "IPv4", 00:16:56.232 "traddr": "10.0.0.1", 00:16:56.232 "trsvcid": "46252" 00:16:56.232 }, 00:16:56.232 "auth": { 00:16:56.232 "state": "completed", 00:16:56.232 "digest": "sha256", 00:16:56.232 "dhgroup": "ffdhe4096" 00:16:56.232 } 00:16:56.232 } 00:16:56.232 ]' 00:16:56.232 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.232 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:56.232 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.232 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:56.232 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.232 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.232 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.232 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.492 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: --dhchap-ctrl-secret DHHC-1:02:MzI3NzI4YjRiYzQ1ZWRjMzcwM2Y2ODVjNDZhMTcwOTQxNjgzN2YzNWFkMzVhYjcwxm6ZNA==: 00:16:56.492 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: --dhchap-ctrl-secret DHHC-1:02:MzI3NzI4YjRiYzQ1ZWRjMzcwM2Y2ODVjNDZhMTcwOTQxNjgzN2YzNWFkMzVhYjcwxm6ZNA==: 00:16:57.064 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:57.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.064 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:57.064 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.064 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.064 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.064 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.064 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:57.064 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:57.326 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:57.326 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.326 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:57.326 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:57.326 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:57.326 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.326 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.326 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.326 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.326 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.326 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.326 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.326 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.586 00:16:57.586 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:16:57.586 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.586 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.846 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.846 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.846 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.846 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.846 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.846 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.846 { 00:16:57.846 "cntlid": 29, 00:16:57.846 "qid": 0, 00:16:57.846 "state": "enabled", 00:16:57.846 "thread": "nvmf_tgt_poll_group_000", 00:16:57.846 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:57.846 "listen_address": { 00:16:57.846 "trtype": "TCP", 00:16:57.846 "adrfam": "IPv4", 00:16:57.846 "traddr": "10.0.0.2", 00:16:57.846 "trsvcid": "4420" 00:16:57.846 }, 00:16:57.846 "peer_address": { 00:16:57.846 "trtype": "TCP", 00:16:57.846 "adrfam": "IPv4", 00:16:57.846 "traddr": "10.0.0.1", 00:16:57.846 "trsvcid": "46268" 00:16:57.846 }, 00:16:57.846 "auth": { 00:16:57.846 "state": "completed", 00:16:57.846 "digest": "sha256", 00:16:57.846 "dhgroup": "ffdhe4096" 00:16:57.846 } 00:16:57.846 } 00:16:57.846 ]' 00:16:57.846 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.846 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:57.846 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.846 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:57.846 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.846 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.846 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.846 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.107 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: --dhchap-ctrl-secret DHHC-1:01:MmZlMmE3MDA5ZGZjYmU3NzM0NjAwNDJjNzllY2ViN2SNmBcN: 00:16:58.107 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: 
--dhchap-ctrl-secret DHHC-1:01:MmZlMmE3MDA5ZGZjYmU3NzM0NjAwNDJjNzllY2ViN2SNmBcN: 00:16:58.677 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.677 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:58.677 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.677 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.677 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.677 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.677 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:58.677 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:58.936 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:58.936 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.936 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:58.936 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:58.936 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:58.936 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.936 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:58.936 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.936 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.936 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.936 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:58.936 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:58.936 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:59.196 00:16:59.196 19:07:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.196 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.196 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.456 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.456 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.456 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.456 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.456 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.456 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.456 { 00:16:59.456 "cntlid": 31, 00:16:59.456 "qid": 0, 00:16:59.456 "state": "enabled", 00:16:59.456 "thread": "nvmf_tgt_poll_group_000", 00:16:59.456 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:59.456 "listen_address": { 00:16:59.456 "trtype": "TCP", 00:16:59.456 "adrfam": "IPv4", 00:16:59.456 "traddr": "10.0.0.2", 00:16:59.456 "trsvcid": "4420" 00:16:59.456 }, 00:16:59.456 "peer_address": { 00:16:59.456 "trtype": "TCP", 00:16:59.456 "adrfam": "IPv4", 00:16:59.457 "traddr": "10.0.0.1", 00:16:59.457 "trsvcid": "38184" 00:16:59.457 }, 00:16:59.457 "auth": { 00:16:59.457 "state": "completed", 00:16:59.457 "digest": "sha256", 00:16:59.457 "dhgroup": "ffdhe4096" 00:16:59.457 } 00:16:59.457 } 00:16:59.457 ]' 00:16:59.457 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.457 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:59.457 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.457 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:59.457 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.457 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.457 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.457 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.716 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=: 00:16:59.716 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=: 00:17:00.285 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.285 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:00.285 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.285 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.285 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.285 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:00.285 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.285 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:00.285 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:00.544 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:17:00.544 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.544 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:00.544 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:00.544 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:00.544 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.544 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.544 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.544 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.544 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.544 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.544 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.544 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.803 00:17:01.064 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.064 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.064 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.064 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.064 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.064 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.064 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.064 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.064 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.064 { 00:17:01.064 "cntlid": 33, 00:17:01.064 "qid": 0, 00:17:01.064 "state": "enabled", 00:17:01.064 "thread": "nvmf_tgt_poll_group_000", 00:17:01.064 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:01.064 "listen_address": { 00:17:01.064 "trtype": "TCP", 00:17:01.064 "adrfam": "IPv4", 00:17:01.064 "traddr": "10.0.0.2", 00:17:01.064 "trsvcid": "4420" 00:17:01.064 }, 00:17:01.064 "peer_address": { 00:17:01.064 "trtype": "TCP", 00:17:01.064 "adrfam": "IPv4", 00:17:01.064 "traddr": "10.0.0.1", 00:17:01.064 "trsvcid": "38216" 00:17:01.064 }, 00:17:01.064 "auth": { 00:17:01.064 "state": "completed", 00:17:01.064 "digest": "sha256", 00:17:01.064 "dhgroup": "ffdhe6144" 00:17:01.064 } 00:17:01.064 } 00:17:01.064 ]' 00:17:01.064 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.064 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:01.064 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.324 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:01.325 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.325 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.325 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.325 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.325 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDdkOTQ3N2NmMmFhZDRjOWM3MzQ2YTMxZTJmZDY3YTA5ZDIyOTk0MGViNDQwMjc3A3oXQQ==: --dhchap-ctrl-secret 
DHHC-1:03:ZmY1ZGY5OTJkNTA1MDhiYTcyZWUxODVkMWQyYzI5YjYyOWU0ZmY0MDgwNzYwYjM1ZDYwYTlmMGVkZDQzNzYyNh9xvq0=: 00:17:01.325 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZDdkOTQ3N2NmMmFhZDRjOWM3MzQ2YTMxZTJmZDY3YTA5ZDIyOTk0MGViNDQwMjc3A3oXQQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY1ZGY5OTJkNTA1MDhiYTcyZWUxODVkMWQyYzI5YjYyOWU0ZmY0MDgwNzYwYjM1ZDYwYTlmMGVkZDQzNzYyNh9xvq0=: 00:17:02.264 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.264 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:02.264 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.264 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.264 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.264 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.264 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:02.264 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:02.264 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:17:02.264 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.264 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:02.264 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:02.264 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:02.264 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.264 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.264 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.264 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.264 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.264 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.264 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.264 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.524 00:17:02.785 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.785 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.785 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.785 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.785 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.785 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.785 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.785 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.785 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.785 { 00:17:02.785 "cntlid": 35, 00:17:02.785 "qid": 0, 00:17:02.785 "state": "enabled", 00:17:02.785 "thread": "nvmf_tgt_poll_group_000", 00:17:02.785 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:02.785 "listen_address": { 00:17:02.785 "trtype": "TCP", 00:17:02.785 "adrfam": "IPv4", 00:17:02.785 "traddr": "10.0.0.2", 00:17:02.785 "trsvcid": "4420" 00:17:02.785 }, 00:17:02.785 "peer_address": { 00:17:02.785 "trtype": "TCP", 00:17:02.785 "adrfam": "IPv4", 00:17:02.785 "traddr": "10.0.0.1", 00:17:02.785 "trsvcid": "38246" 00:17:02.785 }, 00:17:02.785 "auth": { 00:17:02.785 "state": "completed", 00:17:02.785 "digest": "sha256", 00:17:02.785 "dhgroup": "ffdhe6144" 00:17:02.785 } 00:17:02.785 } 00:17:02.785 ]' 00:17:02.785 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.785 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:02.785 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.046 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:03.046 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.046 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.046 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.046 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.307 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: --dhchap-ctrl-secret DHHC-1:02:MzI3NzI4YjRiYzQ1ZWRjMzcwM2Y2ODVjNDZhMTcwOTQxNjgzN2YzNWFkMzVhYjcwxm6ZNA==: 00:17:03.307 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: --dhchap-ctrl-secret DHHC-1:02:MzI3NzI4YjRiYzQ1ZWRjMzcwM2Y2ODVjNDZhMTcwOTQxNjgzN2YzNWFkMzVhYjcwxm6ZNA==: 00:17:03.880 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.880 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:03.880 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.880 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.880 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.880 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.880 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:03.880 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:04.141 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:17:04.141 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.141 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:04.141 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:04.141 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:04.141 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.141 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.141 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.141 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.141 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.141 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.141 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.141 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.402 00:17:04.402 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.402 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.402 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.663 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.663 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.663 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.663 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.663 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.663 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.663 { 00:17:04.663 "cntlid": 37, 00:17:04.663 "qid": 0, 00:17:04.663 "state": "enabled", 00:17:04.663 "thread": "nvmf_tgt_poll_group_000", 00:17:04.663 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:04.663 "listen_address": { 00:17:04.663 "trtype": "TCP", 00:17:04.663 "adrfam": "IPv4", 00:17:04.663 "traddr": "10.0.0.2", 00:17:04.663 "trsvcid": "4420" 00:17:04.663 }, 00:17:04.663 "peer_address": { 00:17:04.663 "trtype": "TCP", 00:17:04.663 "adrfam": "IPv4", 00:17:04.663 "traddr": "10.0.0.1", 00:17:04.663 "trsvcid": "38276" 00:17:04.663 }, 00:17:04.663 "auth": { 00:17:04.663 "state": "completed", 00:17:04.663 "digest": "sha256", 00:17:04.663 "dhgroup": "ffdhe6144" 00:17:04.663 } 00:17:04.663 } 00:17:04.663 ]' 00:17:04.664 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.664 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:04.664 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.664 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:04.664 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.664 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.664 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:04.664 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.924 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: --dhchap-ctrl-secret DHHC-1:01:MmZlMmE3MDA5ZGZjYmU3NzM0NjAwNDJjNzllY2ViN2SNmBcN: 00:17:04.924 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: --dhchap-ctrl-secret DHHC-1:01:MmZlMmE3MDA5ZGZjYmU3NzM0NjAwNDJjNzllY2ViN2SNmBcN: 00:17:05.495 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.495 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:05.495 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.495 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.495 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.495 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.495 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:05.495 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:05.755 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:17:05.755 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.755 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:05.755 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:05.755 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:05.755 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.755 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:05.755 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.755 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.755 19:07:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.755 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:05.755 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:05.755 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.015 00:17:06.015 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.015 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.015 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.276 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.276 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.276 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.276 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.276 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.276 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.276 { 00:17:06.276 "cntlid": 39, 00:17:06.276 "qid": 0, 00:17:06.276 "state": "enabled", 00:17:06.276 "thread": "nvmf_tgt_poll_group_000", 00:17:06.276 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:06.276 "listen_address": { 00:17:06.276 "trtype": "TCP", 00:17:06.276 "adrfam": "IPv4", 00:17:06.276 "traddr": "10.0.0.2", 00:17:06.276 "trsvcid": "4420" 00:17:06.276 }, 00:17:06.276 "peer_address": { 00:17:06.276 "trtype": "TCP", 00:17:06.276 "adrfam": "IPv4", 00:17:06.276 "traddr": "10.0.0.1", 00:17:06.276 "trsvcid": "38294" 00:17:06.276 }, 00:17:06.276 "auth": { 00:17:06.276 "state": "completed", 00:17:06.276 "digest": "sha256", 00:17:06.276 "dhgroup": "ffdhe6144" 00:17:06.276 } 00:17:06.276 } 00:17:06.276 ]' 00:17:06.276 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.276 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:06.276 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.535 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:06.535 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.535 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]]
00:17:06.535 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:06.535 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:06.535 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=:
00:17:06.535 19:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=:
00:17:07.476 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:07.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:07.476 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:07.476 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:07.476 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:07.476 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:07.476 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:17:07.476 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:07.476 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:17:07.476 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:17:07.476 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0
00:17:07.476 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:07.476 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:17:07.476 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:17:07.476 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:07.476 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:07.476 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:07.476 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
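After each attach, the script asserts the negotiated parameters on the live qpair and then repeats the connection through the kernel initiator, handing nvme-cli the raw DHHC-1 secrets before disconnecting and removing the host for the next combination. A sketch of both steps under the same assumptions as above, with KEY/CKEY standing in for the DHHC-1 strings and HOSTID for the uuid shown in this run; the two digits after "DHHC-1:" describe the secret's transform (00 unhashed, 01/02/03 hashed with SHA-256/384/512):

# The qpair must report the digest/dhgroup configured for this pass and an
# authentication state of "completed" (mirrors the checks at target/auth.sh@75-77).
qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]

# Kernel initiator leg: nvme-cli takes the secrets verbatim. --dhchap-ctrl-secret
# is passed only when a controller key was configured for this key id.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
    --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CKEY"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0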
00:17:07.476 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.476 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.476 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.476 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.476 19:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.046 00:17:08.046 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.046 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.046 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.046 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.046 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.046 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.046 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.046 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.046 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.046 { 00:17:08.046 "cntlid": 41, 00:17:08.046 "qid": 0, 00:17:08.046 "state": "enabled", 00:17:08.046 "thread": "nvmf_tgt_poll_group_000", 00:17:08.046 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:08.046 "listen_address": { 00:17:08.046 "trtype": "TCP", 00:17:08.046 "adrfam": "IPv4", 00:17:08.046 "traddr": "10.0.0.2", 00:17:08.046 "trsvcid": "4420" 00:17:08.046 }, 00:17:08.046 "peer_address": { 00:17:08.046 "trtype": "TCP", 00:17:08.046 "adrfam": "IPv4", 00:17:08.046 "traddr": "10.0.0.1", 00:17:08.046 "trsvcid": "60798" 00:17:08.046 }, 00:17:08.046 "auth": { 00:17:08.046 "state": "completed", 00:17:08.046 "digest": "sha256", 00:17:08.046 "dhgroup": "ffdhe8192" 00:17:08.046 } 00:17:08.046 } 00:17:08.046 ]' 00:17:08.046 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.306 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:08.306 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.306 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:08.306 19:07:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.306 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.306 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.306 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.566 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDdkOTQ3N2NmMmFhZDRjOWM3MzQ2YTMxZTJmZDY3YTA5ZDIyOTk0MGViNDQwMjc3A3oXQQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY1ZGY5OTJkNTA1MDhiYTcyZWUxODVkMWQyYzI5YjYyOWU0ZmY0MDgwNzYwYjM1ZDYwYTlmMGVkZDQzNzYyNh9xvq0=: 00:17:08.567 19:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZDdkOTQ3N2NmMmFhZDRjOWM3MzQ2YTMxZTJmZDY3YTA5ZDIyOTk0MGViNDQwMjc3A3oXQQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY1ZGY5OTJkNTA1MDhiYTcyZWUxODVkMWQyYzI5YjYyOWU0ZmY0MDgwNzYwYjM1ZDYwYTlmMGVkZDQzNzYyNh9xvq0=: 00:17:09.137 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.137 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:09.137 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.137 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.137 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.137 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.137 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:09.137 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:09.397 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:17:09.397 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.397 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:09.397 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:09.397 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:09.397 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.397 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.397 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.397 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.397 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.397 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.397 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.397 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.969 00:17:09.969 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.969 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.969 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.969 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.969 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.969 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.969 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.969 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.969 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.969 { 00:17:09.969 "cntlid": 43, 00:17:09.969 "qid": 0, 00:17:09.969 "state": "enabled", 00:17:09.969 "thread": "nvmf_tgt_poll_group_000", 00:17:09.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:09.969 "listen_address": { 00:17:09.969 "trtype": "TCP", 00:17:09.969 "adrfam": "IPv4", 00:17:09.969 "traddr": "10.0.0.2", 00:17:09.969 "trsvcid": "4420" 00:17:09.969 }, 00:17:09.969 "peer_address": { 00:17:09.969 "trtype": "TCP", 00:17:09.969 "adrfam": "IPv4", 00:17:09.969 "traddr": "10.0.0.1", 00:17:09.969 "trsvcid": "60822" 00:17:09.969 }, 00:17:09.969 "auth": { 00:17:09.969 "state": "completed", 00:17:09.969 "digest": "sha256", 00:17:09.969 "dhgroup": "ffdhe8192" 00:17:09.969 } 00:17:09.969 } 00:17:09.969 ]' 00:17:09.969 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.969 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:17:09.969 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.230 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:10.230 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.230 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.230 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.230 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.230 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: --dhchap-ctrl-secret DHHC-1:02:MzI3NzI4YjRiYzQ1ZWRjMzcwM2Y2ODVjNDZhMTcwOTQxNjgzN2YzNWFkMzVhYjcwxm6ZNA==: 00:17:10.230 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: --dhchap-ctrl-secret DHHC-1:02:MzI3NzI4YjRiYzQ1ZWRjMzcwM2Y2ODVjNDZhMTcwOTQxNjgzN2YzNWFkMzVhYjcwxm6ZNA==: 00:17:11.171 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.171 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:11.171 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.171 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.171 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.171 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.171 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:11.171 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:11.171 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:17:11.171 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.171 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:11.171 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:11.171 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:11.171 19:07:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.171 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.171 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.171 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.171 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.171 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.171 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.171 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.742 00:17:11.742 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.742 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.742 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.742 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.742 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.742 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.742 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.003 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.003 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.003 { 00:17:12.003 "cntlid": 45, 00:17:12.003 "qid": 0, 00:17:12.003 "state": "enabled", 00:17:12.003 "thread": "nvmf_tgt_poll_group_000", 00:17:12.003 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:12.003 "listen_address": { 00:17:12.003 "trtype": "TCP", 00:17:12.003 "adrfam": "IPv4", 00:17:12.003 "traddr": "10.0.0.2", 00:17:12.003 "trsvcid": "4420" 00:17:12.003 }, 00:17:12.003 "peer_address": { 00:17:12.003 "trtype": "TCP", 00:17:12.003 "adrfam": "IPv4", 00:17:12.003 "traddr": "10.0.0.1", 00:17:12.003 "trsvcid": "60850" 00:17:12.003 }, 00:17:12.003 "auth": { 00:17:12.003 "state": "completed", 00:17:12.003 "digest": "sha256", 00:17:12.003 "dhgroup": "ffdhe8192" 00:17:12.003 } 00:17:12.003 } 00:17:12.003 ]' 00:17:12.003 
19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.003 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:12.004 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.004 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:12.004 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.004 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.004 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.004 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.263 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: --dhchap-ctrl-secret DHHC-1:01:MmZlMmE3MDA5ZGZjYmU3NzM0NjAwNDJjNzllY2ViN2SNmBcN: 00:17:12.263 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: --dhchap-ctrl-secret DHHC-1:01:MmZlMmE3MDA5ZGZjYmU3NzM0NjAwNDJjNzllY2ViN2SNmBcN: 00:17:12.834 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.834 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.834 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:12.834 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.834 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.834 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.834 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.834 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:12.834 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:13.096 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:17:13.096 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.096 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:13.096 19:07:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:13.096 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:13.096 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.096 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:13.096 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.096 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.096 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.096 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:13.096 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:13.096 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:13.357 00:17:13.618 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.618 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.618 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.618 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.618 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.618 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.618 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.618 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.618 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.618 { 00:17:13.618 "cntlid": 47, 00:17:13.618 "qid": 0, 00:17:13.618 "state": "enabled", 00:17:13.618 "thread": "nvmf_tgt_poll_group_000", 00:17:13.618 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:13.618 "listen_address": { 00:17:13.618 "trtype": "TCP", 00:17:13.618 "adrfam": "IPv4", 00:17:13.618 "traddr": "10.0.0.2", 00:17:13.618 "trsvcid": "4420" 00:17:13.618 }, 00:17:13.618 "peer_address": { 00:17:13.618 "trtype": "TCP", 00:17:13.618 "adrfam": "IPv4", 00:17:13.618 "traddr": "10.0.0.1", 00:17:13.618 "trsvcid": "60876" 00:17:13.618 }, 00:17:13.618 "auth": { 00:17:13.618 "state": "completed", 00:17:13.618 
"digest": "sha256", 00:17:13.618 "dhgroup": "ffdhe8192" 00:17:13.618 } 00:17:13.618 } 00:17:13.618 ]' 00:17:13.618 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.618 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:13.618 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.880 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:13.880 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.880 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.880 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.880 19:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.142 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=: 00:17:14.142 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=: 00:17:14.712 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.712 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:14.712 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.712 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.712 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.712 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:14.712 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:14.712 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.712 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:14.712 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:14.974 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:14.974 19:07:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.974 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:14.974 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:14.974 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:14.974 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.974 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.974 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.974 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.974 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.974 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.974 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.974 19:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.974 00:17:14.974 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.974 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.974 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.235 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.235 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.235 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.235 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.235 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.235 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.235 { 00:17:15.235 "cntlid": 49, 00:17:15.235 "qid": 0, 00:17:15.235 "state": "enabled", 00:17:15.235 "thread": "nvmf_tgt_poll_group_000", 00:17:15.235 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:15.235 "listen_address": { 00:17:15.235 "trtype": "TCP", 00:17:15.235 "adrfam": "IPv4", 
00:17:15.235 "traddr": "10.0.0.2", 00:17:15.235 "trsvcid": "4420" 00:17:15.235 }, 00:17:15.235 "peer_address": { 00:17:15.235 "trtype": "TCP", 00:17:15.235 "adrfam": "IPv4", 00:17:15.235 "traddr": "10.0.0.1", 00:17:15.235 "trsvcid": "60906" 00:17:15.235 }, 00:17:15.235 "auth": { 00:17:15.235 "state": "completed", 00:17:15.235 "digest": "sha384", 00:17:15.235 "dhgroup": "null" 00:17:15.235 } 00:17:15.235 } 00:17:15.235 ]' 00:17:15.235 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.235 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:15.235 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.497 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:15.497 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.497 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.497 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.497 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.497 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDdkOTQ3N2NmMmFhZDRjOWM3MzQ2YTMxZTJmZDY3YTA5ZDIyOTk0MGViNDQwMjc3A3oXQQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY1ZGY5OTJkNTA1MDhiYTcyZWUxODVkMWQyYzI5YjYyOWU0ZmY0MDgwNzYwYjM1ZDYwYTlmMGVkZDQzNzYyNh9xvq0=: 00:17:15.497 19:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZDdkOTQ3N2NmMmFhZDRjOWM3MzQ2YTMxZTJmZDY3YTA5ZDIyOTk0MGViNDQwMjc3A3oXQQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY1ZGY5OTJkNTA1MDhiYTcyZWUxODVkMWQyYzI5YjYyOWU0ZmY0MDgwNzYwYjM1ZDYwYTlmMGVkZDQzNzYyNh9xvq0=: 00:17:16.439 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.439 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:16.439 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.439 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.439 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.439 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.439 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:16.439 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:16.439 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:16.439 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.439 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:16.439 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:16.439 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:16.439 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.439 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.439 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.439 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.439 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.439 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.439 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.439 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.700 00:17:16.700 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.700 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.700 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.961 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.961 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.961 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.961 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.961 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.961 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.961 { 00:17:16.961 "cntlid": 51, 00:17:16.961 "qid": 0, 00:17:16.961 "state": "enabled", 
00:17:16.961 "thread": "nvmf_tgt_poll_group_000", 00:17:16.961 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:16.961 "listen_address": { 00:17:16.961 "trtype": "TCP", 00:17:16.961 "adrfam": "IPv4", 00:17:16.961 "traddr": "10.0.0.2", 00:17:16.961 "trsvcid": "4420" 00:17:16.961 }, 00:17:16.961 "peer_address": { 00:17:16.961 "trtype": "TCP", 00:17:16.961 "adrfam": "IPv4", 00:17:16.961 "traddr": "10.0.0.1", 00:17:16.961 "trsvcid": "60944" 00:17:16.961 }, 00:17:16.961 "auth": { 00:17:16.961 "state": "completed", 00:17:16.961 "digest": "sha384", 00:17:16.961 "dhgroup": "null" 00:17:16.961 } 00:17:16.961 } 00:17:16.961 ]' 00:17:16.961 19:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.961 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:16.961 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.961 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:16.961 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.961 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.961 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.961 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.221 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: --dhchap-ctrl-secret DHHC-1:02:MzI3NzI4YjRiYzQ1ZWRjMzcwM2Y2ODVjNDZhMTcwOTQxNjgzN2YzNWFkMzVhYjcwxm6ZNA==: 00:17:17.222 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: --dhchap-ctrl-secret DHHC-1:02:MzI3NzI4YjRiYzQ1ZWRjMzcwM2Y2ODVjNDZhMTcwOTQxNjgzN2YzNWFkMzVhYjcwxm6ZNA==: 00:17:17.794 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.794 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:17.794 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.794 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.794 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.794 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.794 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:17:17.794 19:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:18.055 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:17:18.055 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.055 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:18.055 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:18.055 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:18.055 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.055 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.055 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.055 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.055 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.055 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.055 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.055 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.316 00:17:18.316 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.316 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.316 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.578 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.578 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.578 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.578 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.578 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.578 19:07:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.578 { 00:17:18.578 "cntlid": 53, 00:17:18.578 "qid": 0, 00:17:18.578 "state": "enabled", 00:17:18.578 "thread": "nvmf_tgt_poll_group_000", 00:17:18.578 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:18.578 "listen_address": { 00:17:18.578 "trtype": "TCP", 00:17:18.578 "adrfam": "IPv4", 00:17:18.578 "traddr": "10.0.0.2", 00:17:18.578 "trsvcid": "4420" 00:17:18.578 }, 00:17:18.578 "peer_address": { 00:17:18.578 "trtype": "TCP", 00:17:18.578 "adrfam": "IPv4", 00:17:18.578 "traddr": "10.0.0.1", 00:17:18.578 "trsvcid": "47262" 00:17:18.578 }, 00:17:18.578 "auth": { 00:17:18.578 "state": "completed", 00:17:18.578 "digest": "sha384", 00:17:18.578 "dhgroup": "null" 00:17:18.578 } 00:17:18.578 } 00:17:18.578 ]' 00:17:18.578 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.578 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.578 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.578 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:18.578 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.578 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.578 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.578 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.839 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: --dhchap-ctrl-secret DHHC-1:01:MmZlMmE3MDA5ZGZjYmU3NzM0NjAwNDJjNzllY2ViN2SNmBcN: 00:17:18.839 19:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: --dhchap-ctrl-secret DHHC-1:01:MmZlMmE3MDA5ZGZjYmU3NzM0NjAwNDJjNzllY2ViN2SNmBcN: 00:17:19.411 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.411 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:19.411 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.411 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.411 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.411 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:17:19.411 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:19.411 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:19.672 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:19.672 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.672 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:19.672 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:19.672 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:19.672 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.672 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:19.672 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.672 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.672 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.672 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:19.672 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:19.672 19:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:19.933 00:17:19.933 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.933 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.933 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.193 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.193 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.194 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.194 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.194 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.194 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.194 { 00:17:20.194 "cntlid": 55, 00:17:20.194 "qid": 0, 00:17:20.194 "state": "enabled", 00:17:20.194 "thread": "nvmf_tgt_poll_group_000", 00:17:20.194 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:20.194 "listen_address": { 00:17:20.194 "trtype": "TCP", 00:17:20.194 "adrfam": "IPv4", 00:17:20.194 "traddr": "10.0.0.2", 00:17:20.194 "trsvcid": "4420" 00:17:20.194 }, 00:17:20.194 "peer_address": { 00:17:20.194 "trtype": "TCP", 00:17:20.194 "adrfam": "IPv4", 00:17:20.194 "traddr": "10.0.0.1", 00:17:20.194 "trsvcid": "47288" 00:17:20.194 }, 00:17:20.194 "auth": { 00:17:20.194 "state": "completed", 00:17:20.194 "digest": "sha384", 00:17:20.194 "dhgroup": "null" 00:17:20.194 } 00:17:20.194 } 00:17:20.194 ]' 00:17:20.194 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.194 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.194 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.194 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:20.194 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.194 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.194 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.194 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.454 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=: 00:17:20.454 19:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=: 00:17:21.025 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.025 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:21.025 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.025 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.025 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.025 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:21.025 19:07:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.025 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:21.025 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:21.286 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:21.286 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.286 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:21.286 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:21.286 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:21.286 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.286 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.286 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.286 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.286 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.286 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.286 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.286 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.547 00:17:21.547 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.547 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.547 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.808 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.808 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.808 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:21.808 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.808 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.808 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.808 { 00:17:21.808 "cntlid": 57, 00:17:21.808 "qid": 0, 00:17:21.808 "state": "enabled", 00:17:21.808 "thread": "nvmf_tgt_poll_group_000", 00:17:21.808 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:21.808 "listen_address": { 00:17:21.808 "trtype": "TCP", 00:17:21.808 "adrfam": "IPv4", 00:17:21.808 "traddr": "10.0.0.2", 00:17:21.808 "trsvcid": "4420" 00:17:21.808 }, 00:17:21.808 "peer_address": { 00:17:21.808 "trtype": "TCP", 00:17:21.808 "adrfam": "IPv4", 00:17:21.808 "traddr": "10.0.0.1", 00:17:21.808 "trsvcid": "47322" 00:17:21.808 }, 00:17:21.808 "auth": { 00:17:21.808 "state": "completed", 00:17:21.808 "digest": "sha384", 00:17:21.808 "dhgroup": "ffdhe2048" 00:17:21.808 } 00:17:21.808 } 00:17:21.808 ]' 00:17:21.808 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.808 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:21.808 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.808 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:21.808 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.808 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.808 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.808 19:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.069 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDdkOTQ3N2NmMmFhZDRjOWM3MzQ2YTMxZTJmZDY3YTA5ZDIyOTk0MGViNDQwMjc3A3oXQQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY1ZGY5OTJkNTA1MDhiYTcyZWUxODVkMWQyYzI5YjYyOWU0ZmY0MDgwNzYwYjM1ZDYwYTlmMGVkZDQzNzYyNh9xvq0=: 00:17:22.069 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZDdkOTQ3N2NmMmFhZDRjOWM3MzQ2YTMxZTJmZDY3YTA5ZDIyOTk0MGViNDQwMjc3A3oXQQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY1ZGY5OTJkNTA1MDhiYTcyZWUxODVkMWQyYzI5YjYyOWU0ZmY0MDgwNzYwYjM1ZDYwYTlmMGVkZDQzNzYyNh9xvq0=: 00:17:22.640 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.640 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:22.640 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.640 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.640 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.640 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.640 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:22.640 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:22.899 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:22.899 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.899 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:22.899 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:22.899 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:22.899 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.899 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.899 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.899 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.899 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.899 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.899 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.899 19:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.159 00:17:23.159 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.159 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.159 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.419 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.419 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.419 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.419 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.419 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.419 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.419 { 00:17:23.419 "cntlid": 59, 00:17:23.419 "qid": 0, 00:17:23.419 "state": "enabled", 00:17:23.419 "thread": "nvmf_tgt_poll_group_000", 00:17:23.419 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:23.419 "listen_address": { 00:17:23.419 "trtype": "TCP", 00:17:23.419 "adrfam": "IPv4", 00:17:23.419 "traddr": "10.0.0.2", 00:17:23.419 "trsvcid": "4420" 00:17:23.419 }, 00:17:23.419 "peer_address": { 00:17:23.419 "trtype": "TCP", 00:17:23.419 "adrfam": "IPv4", 00:17:23.419 "traddr": "10.0.0.1", 00:17:23.419 "trsvcid": "47362" 00:17:23.419 }, 00:17:23.419 "auth": { 00:17:23.419 "state": "completed", 00:17:23.419 "digest": "sha384", 00:17:23.419 "dhgroup": "ffdhe2048" 00:17:23.419 } 00:17:23.419 } 00:17:23.419 ]' 00:17:23.419 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.419 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:23.419 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.419 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:23.419 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.419 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.419 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.419 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.679 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: --dhchap-ctrl-secret DHHC-1:02:MzI3NzI4YjRiYzQ1ZWRjMzcwM2Y2ODVjNDZhMTcwOTQxNjgzN2YzNWFkMzVhYjcwxm6ZNA==: 00:17:23.679 19:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: --dhchap-ctrl-secret DHHC-1:02:MzI3NzI4YjRiYzQ1ZWRjMzcwM2Y2ODVjNDZhMTcwOTQxNjgzN2YzNWFkMzVhYjcwxm6ZNA==: 00:17:24.249 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.249 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:24.249 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.249 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.249 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.249 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.249 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:24.249 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:24.512 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:24.512 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.512 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:24.512 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:24.512 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:24.512 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.513 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.513 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.513 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.513 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.513 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.513 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.513 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.784 00:17:24.784 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.784 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.784 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.784 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.784 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.784 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.784 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.784 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.784 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.784 { 00:17:24.784 "cntlid": 61, 00:17:24.784 "qid": 0, 00:17:24.784 "state": "enabled", 00:17:24.784 "thread": "nvmf_tgt_poll_group_000", 00:17:24.784 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:24.784 "listen_address": { 00:17:24.784 "trtype": "TCP", 00:17:24.784 "adrfam": "IPv4", 00:17:24.784 "traddr": "10.0.0.2", 00:17:24.784 "trsvcid": "4420" 00:17:24.784 }, 00:17:24.784 "peer_address": { 00:17:24.784 "trtype": "TCP", 00:17:24.784 "adrfam": "IPv4", 00:17:24.784 "traddr": "10.0.0.1", 00:17:24.784 "trsvcid": "47380" 00:17:24.784 }, 00:17:24.784 "auth": { 00:17:24.784 "state": "completed", 00:17:24.784 "digest": "sha384", 00:17:24.785 "dhgroup": "ffdhe2048" 00:17:24.785 } 00:17:24.785 } 00:17:24.785 ]' 00:17:25.046 19:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.046 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.046 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.046 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:25.046 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.046 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.046 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.046 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.306 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: --dhchap-ctrl-secret DHHC-1:01:MmZlMmE3MDA5ZGZjYmU3NzM0NjAwNDJjNzllY2ViN2SNmBcN: 00:17:25.306 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: --dhchap-ctrl-secret DHHC-1:01:MmZlMmE3MDA5ZGZjYmU3NzM0NjAwNDJjNzllY2ViN2SNmBcN: 00:17:25.877 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.877 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:25.877 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.877 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.877 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.877 19:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.877 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:25.877 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:26.138 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:26.138 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.138 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:26.138 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:26.138 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:26.138 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.138 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:26.138 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.138 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.138 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.138 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:26.138 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:26.138 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:26.398 00:17:26.398 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.398 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.398 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.398 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.398 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.398 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.398 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.658 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.658 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.658 { 00:17:26.658 "cntlid": 63, 00:17:26.658 "qid": 0, 00:17:26.658 "state": "enabled", 00:17:26.658 "thread": "nvmf_tgt_poll_group_000", 00:17:26.658 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:26.658 "listen_address": { 00:17:26.658 "trtype": "TCP", 00:17:26.658 "adrfam": "IPv4", 00:17:26.658 "traddr": "10.0.0.2", 00:17:26.658 "trsvcid": "4420" 00:17:26.658 }, 00:17:26.658 "peer_address": { 00:17:26.658 "trtype": "TCP", 00:17:26.658 "adrfam": "IPv4", 00:17:26.658 "traddr": "10.0.0.1", 00:17:26.658 "trsvcid": "47416" 00:17:26.658 }, 00:17:26.658 "auth": { 00:17:26.658 "state": "completed", 00:17:26.658 "digest": "sha384", 00:17:26.658 "dhgroup": "ffdhe2048" 00:17:26.658 } 00:17:26.659 } 00:17:26.659 ]' 00:17:26.659 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.659 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:26.659 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.659 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:26.659 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.659 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.659 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.659 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.919 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=: 00:17:26.919 19:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=: 00:17:27.490 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:27.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.490 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:27.490 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.490 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.751 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.751 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:27.751 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.751 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:27.751 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:27.751 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:27.751 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.751 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:27.751 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:27.751 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:27.751 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.751 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.751 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.751 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.751 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.751 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.751 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.751 19:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.013 
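Every connect_authenticate pass in this trace has the same shape: bdev_nvme_set_options pins the host-side RPC server (-s /var/tmp/host.sock) to a single digest/dhgroup pair, nvmf_subsystem_add_host provisions the host NQN on the target with the key under test, bdev_nvme_attach_controller dials in, and the target is then asked for its queue pairs so the negotiated parameters can be asserted. A condensed sketch of that verification step, using only commands and values that appear verbatim in the trace (rpc_cmd is effectively a wrapper around scripts/rpc.py supplied by common/autotest_common.sh):

    # target-side RPC: dump the subsystem's qpairs, then assert on the .auth object
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]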
00:17:28.013 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.013 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.013 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.273 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.273 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.273 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.273 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.273 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.273 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.273 { 00:17:28.273 "cntlid": 65, 00:17:28.273 "qid": 0, 00:17:28.273 "state": "enabled", 00:17:28.273 "thread": "nvmf_tgt_poll_group_000", 00:17:28.273 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:28.273 "listen_address": { 00:17:28.273 "trtype": "TCP", 00:17:28.273 "adrfam": "IPv4", 00:17:28.273 "traddr": "10.0.0.2", 00:17:28.273 "trsvcid": "4420" 00:17:28.273 }, 00:17:28.273 "peer_address": { 00:17:28.273 "trtype": "TCP", 00:17:28.273 "adrfam": "IPv4", 00:17:28.273 "traddr": "10.0.0.1", 00:17:28.273 "trsvcid": "47918" 00:17:28.273 }, 00:17:28.273 "auth": { 00:17:28.273 "state": "completed", 00:17:28.273 "digest": "sha384", 00:17:28.273 "dhgroup": "ffdhe3072" 00:17:28.273 } 00:17:28.273 } 00:17:28.273 ]' 00:17:28.273 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.273 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:28.273 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.273 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:28.273 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.273 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.273 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.273 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.533 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDdkOTQ3N2NmMmFhZDRjOWM3MzQ2YTMxZTJmZDY3YTA5ZDIyOTk0MGViNDQwMjc3A3oXQQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY1ZGY5OTJkNTA1MDhiYTcyZWUxODVkMWQyYzI5YjYyOWU0ZmY0MDgwNzYwYjM1ZDYwYTlmMGVkZDQzNzYyNh9xvq0=: 00:17:28.533 19:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZDdkOTQ3N2NmMmFhZDRjOWM3MzQ2YTMxZTJmZDY3YTA5ZDIyOTk0MGViNDQwMjc3A3oXQQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY1ZGY5OTJkNTA1MDhiYTcyZWUxODVkMWQyYzI5YjYyOWU0ZmY0MDgwNzYwYjM1ZDYwYTlmMGVkZDQzNzYyNh9xvq0=: 00:17:29.104 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.104 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:29.104 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.104 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.365 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.365 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.365 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:29.365 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:29.365 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:29.365 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.365 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:29.365 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:29.365 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:29.365 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.365 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.365 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.365 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.365 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.365 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.365 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.365 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.625 00:17:29.625 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.625 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.625 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.886 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.886 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.886 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.886 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.886 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.886 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.886 { 00:17:29.886 "cntlid": 67, 00:17:29.886 "qid": 0, 00:17:29.886 "state": "enabled", 00:17:29.886 "thread": "nvmf_tgt_poll_group_000", 00:17:29.886 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:29.886 "listen_address": { 00:17:29.886 "trtype": "TCP", 00:17:29.886 "adrfam": "IPv4", 00:17:29.886 "traddr": "10.0.0.2", 00:17:29.886 "trsvcid": "4420" 00:17:29.886 }, 00:17:29.886 "peer_address": { 00:17:29.886 "trtype": "TCP", 00:17:29.886 "adrfam": "IPv4", 00:17:29.886 "traddr": "10.0.0.1", 00:17:29.886 "trsvcid": "47950" 00:17:29.886 }, 00:17:29.886 "auth": { 00:17:29.886 "state": "completed", 00:17:29.886 "digest": "sha384", 00:17:29.886 "dhgroup": "ffdhe3072" 00:17:29.886 } 00:17:29.886 } 00:17:29.886 ]' 00:17:29.886 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.886 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:29.886 19:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.886 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:29.886 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.886 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.886 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.886 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.147 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: --dhchap-ctrl-secret 
DHHC-1:02:MzI3NzI4YjRiYzQ1ZWRjMzcwM2Y2ODVjNDZhMTcwOTQxNjgzN2YzNWFkMzVhYjcwxm6ZNA==: 00:17:30.147 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: --dhchap-ctrl-secret DHHC-1:02:MzI3NzI4YjRiYzQ1ZWRjMzcwM2Y2ODVjNDZhMTcwOTQxNjgzN2YzNWFkMzVhYjcwxm6ZNA==: 00:17:30.718 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.980 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:30.980 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.980 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.980 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.980 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.980 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:30.980 19:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:30.980 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:30.980 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.980 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:30.980 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:30.980 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:30.980 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.980 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.980 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.980 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.980 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.980 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.980 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.980 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.242 00:17:31.242 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.242 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.242 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.503 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.503 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.503 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.503 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.503 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.503 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.503 { 00:17:31.503 "cntlid": 69, 00:17:31.503 "qid": 0, 00:17:31.503 "state": "enabled", 00:17:31.503 "thread": "nvmf_tgt_poll_group_000", 00:17:31.503 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:31.503 "listen_address": { 00:17:31.503 "trtype": "TCP", 00:17:31.503 "adrfam": "IPv4", 00:17:31.503 "traddr": "10.0.0.2", 00:17:31.503 "trsvcid": "4420" 00:17:31.503 }, 00:17:31.503 "peer_address": { 00:17:31.503 "trtype": "TCP", 00:17:31.503 "adrfam": "IPv4", 00:17:31.503 "traddr": "10.0.0.1", 00:17:31.503 "trsvcid": "47962" 00:17:31.503 }, 00:17:31.503 "auth": { 00:17:31.503 "state": "completed", 00:17:31.503 "digest": "sha384", 00:17:31.503 "dhgroup": "ffdhe3072" 00:17:31.503 } 00:17:31.503 } 00:17:31.503 ]' 00:17:31.503 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.503 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:31.503 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.503 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:31.503 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.503 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.503 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.503 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:31.763 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: --dhchap-ctrl-secret DHHC-1:01:MmZlMmE3MDA5ZGZjYmU3NzM0NjAwNDJjNzllY2ViN2SNmBcN: 00:17:31.763 19:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: --dhchap-ctrl-secret DHHC-1:01:MmZlMmE3MDA5ZGZjYmU3NzM0NjAwNDJjNzllY2ViN2SNmBcN: 00:17:32.334 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.595 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:32.595 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.595 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.595 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.595 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.595 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:32.595 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:32.595 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:32.595 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.595 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:32.595 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:32.595 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:32.595 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.595 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:32.595 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.595 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.595 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.595 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
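Note how the key3 iterations differ from key0 through key2: the bdev_connect call here passes only --dhchap-key key3 and no --dhchap-ctrlr-key, because ckeys[3] is empty and the script's ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion (visible at target/auth.sh@68 above) collapses to an empty array — unidirectional authentication, where the host proves itself but the controller is not challenged back. A minimal reproduction of that bash idiom, with a hypothetical ckeys array standing in for the script's real one:

    ckeys=("ck0" "ck1" "ck2" "")   # hypothetical: key3 has no paired controller key
    keyid=3
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${#ckey[@]}"             # 0 for keyid=3; 2 for keyid=0..2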
00:17:32.595 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:32.595 19:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:32.856 00:17:32.856 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.856 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.856 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.118 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.118 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.118 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.118 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.118 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.118 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.118 { 00:17:33.118 "cntlid": 71, 00:17:33.118 "qid": 0, 00:17:33.118 "state": "enabled", 00:17:33.118 "thread": "nvmf_tgt_poll_group_000", 00:17:33.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:33.118 "listen_address": { 00:17:33.118 "trtype": "TCP", 00:17:33.118 "adrfam": "IPv4", 00:17:33.118 "traddr": "10.0.0.2", 00:17:33.118 "trsvcid": "4420" 00:17:33.118 }, 00:17:33.118 "peer_address": { 00:17:33.118 "trtype": "TCP", 00:17:33.118 "adrfam": "IPv4", 00:17:33.118 "traddr": "10.0.0.1", 00:17:33.118 "trsvcid": "47988" 00:17:33.118 }, 00:17:33.118 "auth": { 00:17:33.118 "state": "completed", 00:17:33.118 "digest": "sha384", 00:17:33.118 "dhgroup": "ffdhe3072" 00:17:33.118 } 00:17:33.118 } 00:17:33.118 ]' 00:17:33.118 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.118 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:33.118 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.118 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:33.118 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.379 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.379 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.379 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.379 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=: 00:17:33.379 19:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=: 00:17:33.950 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.950 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:33.950 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.950 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.210 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.210 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:34.210 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.210 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:34.210 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:34.210 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:34.210 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.210 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:34.210 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:34.210 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:34.210 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.210 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.210 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.210 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.210 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
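On the initiator side the same secrets drive nvme-cli directly, as in the nvme connect / nvme disconnect round-trips interleaved above. The --dhchap-secret and --dhchap-ctrl-secret values use the DHHC-1 representation from the NVMe DH-HMAC-CHAP specification, DHHC-1:<t>:<base64 secret>:, where <t> selects the optional hash transformation applied to the secret (00 = none, 01/02/03 = SHA-256/-384/-512); that matches the 00-03 variants visible throughout this trace. Shape of the call, with flags copied from the log and the key material elided rather than invented:

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
        --dhchap-secret 'DHHC-1:00:<host key>:' \
        --dhchap-ctrl-secret 'DHHC-1:03:<controller key>:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0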
00:17:34.210 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.211 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.211 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.472 00:17:34.472 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.472 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.472 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.732 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.732 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.732 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.732 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.732 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.732 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.732 { 00:17:34.732 "cntlid": 73, 00:17:34.732 "qid": 0, 00:17:34.732 "state": "enabled", 00:17:34.732 "thread": "nvmf_tgt_poll_group_000", 00:17:34.732 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:34.732 "listen_address": { 00:17:34.732 "trtype": "TCP", 00:17:34.732 "adrfam": "IPv4", 00:17:34.732 "traddr": "10.0.0.2", 00:17:34.732 "trsvcid": "4420" 00:17:34.732 }, 00:17:34.732 "peer_address": { 00:17:34.732 "trtype": "TCP", 00:17:34.732 "adrfam": "IPv4", 00:17:34.732 "traddr": "10.0.0.1", 00:17:34.732 "trsvcid": "48000" 00:17:34.732 }, 00:17:34.732 "auth": { 00:17:34.732 "state": "completed", 00:17:34.732 "digest": "sha384", 00:17:34.732 "dhgroup": "ffdhe4096" 00:17:34.732 } 00:17:34.732 } 00:17:34.732 ]' 00:17:34.732 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.732 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.732 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.732 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:34.732 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.994 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.994 
19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.994 19:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.994 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDdkOTQ3N2NmMmFhZDRjOWM3MzQ2YTMxZTJmZDY3YTA5ZDIyOTk0MGViNDQwMjc3A3oXQQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY1ZGY5OTJkNTA1MDhiYTcyZWUxODVkMWQyYzI5YjYyOWU0ZmY0MDgwNzYwYjM1ZDYwYTlmMGVkZDQzNzYyNh9xvq0=: 00:17:34.994 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZDdkOTQ3N2NmMmFhZDRjOWM3MzQ2YTMxZTJmZDY3YTA5ZDIyOTk0MGViNDQwMjc3A3oXQQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY1ZGY5OTJkNTA1MDhiYTcyZWUxODVkMWQyYzI5YjYyOWU0ZmY0MDgwNzYwYjM1ZDYwYTlmMGVkZDQzNzYyNh9xvq0=: 00:17:35.935 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.935 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:35.935 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.935 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.935 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.935 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.935 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:35.935 19:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:35.935 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:35.935 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.935 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:35.935 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:35.935 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:35.935 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.935 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.935 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.935 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.935 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.935 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.935 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.935 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.196 00:17:36.196 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.196 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.196 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.456 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.456 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.456 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.456 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.456 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.456 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.456 { 00:17:36.456 "cntlid": 75, 00:17:36.456 "qid": 0, 00:17:36.456 "state": "enabled", 00:17:36.456 "thread": "nvmf_tgt_poll_group_000", 00:17:36.456 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:36.456 "listen_address": { 00:17:36.456 "trtype": "TCP", 00:17:36.456 "adrfam": "IPv4", 00:17:36.456 "traddr": "10.0.0.2", 00:17:36.456 "trsvcid": "4420" 00:17:36.456 }, 00:17:36.456 "peer_address": { 00:17:36.456 "trtype": "TCP", 00:17:36.456 "adrfam": "IPv4", 00:17:36.456 "traddr": "10.0.0.1", 00:17:36.456 "trsvcid": "48032" 00:17:36.456 }, 00:17:36.456 "auth": { 00:17:36.456 "state": "completed", 00:17:36.456 "digest": "sha384", 00:17:36.456 "dhgroup": "ffdhe4096" 00:17:36.456 } 00:17:36.456 } 00:17:36.456 ]' 00:17:36.456 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.456 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.456 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.456 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:17:36.456 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.456 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.456 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.456 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.716 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: --dhchap-ctrl-secret DHHC-1:02:MzI3NzI4YjRiYzQ1ZWRjMzcwM2Y2ODVjNDZhMTcwOTQxNjgzN2YzNWFkMzVhYjcwxm6ZNA==: 00:17:36.716 19:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: --dhchap-ctrl-secret DHHC-1:02:MzI3NzI4YjRiYzQ1ZWRjMzcwM2Y2ODVjNDZhMTcwOTQxNjgzN2YzNWFkMzVhYjcwxm6ZNA==: 00:17:37.287 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.287 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:37.287 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.287 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.287 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.287 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.287 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:37.287 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:37.548 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:37.548 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.548 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:37.548 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:37.548 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:37.548 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.548 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.548 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.548 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.548 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.548 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.548 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.548 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.809 00:17:37.809 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.809 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.809 19:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.070 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.070 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.070 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.070 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.070 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.070 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.070 { 00:17:38.070 "cntlid": 77, 00:17:38.070 "qid": 0, 00:17:38.070 "state": "enabled", 00:17:38.070 "thread": "nvmf_tgt_poll_group_000", 00:17:38.070 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:38.070 "listen_address": { 00:17:38.070 "trtype": "TCP", 00:17:38.070 "adrfam": "IPv4", 00:17:38.070 "traddr": "10.0.0.2", 00:17:38.070 "trsvcid": "4420" 00:17:38.070 }, 00:17:38.070 "peer_address": { 00:17:38.070 "trtype": "TCP", 00:17:38.070 "adrfam": "IPv4", 00:17:38.070 "traddr": "10.0.0.1", 00:17:38.070 "trsvcid": "35816" 00:17:38.070 }, 00:17:38.070 "auth": { 00:17:38.070 "state": "completed", 00:17:38.070 "digest": "sha384", 00:17:38.070 "dhgroup": "ffdhe4096" 00:17:38.070 } 00:17:38.070 } 00:17:38.070 ]' 00:17:38.070 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.070 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:38.070 19:07:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.070 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:38.070 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.070 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.070 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.070 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.332 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: --dhchap-ctrl-secret DHHC-1:01:MmZlMmE3MDA5ZGZjYmU3NzM0NjAwNDJjNzllY2ViN2SNmBcN: 00:17:38.332 19:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: --dhchap-ctrl-secret DHHC-1:01:MmZlMmE3MDA5ZGZjYmU3NzM0NjAwNDJjNzllY2ViN2SNmBcN: 00:17:38.904 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.904 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:38.904 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.904 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.904 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.904 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.904 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:38.904 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:39.164 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:39.164 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.164 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:39.164 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:39.164 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:39.164 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.165 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:39.165 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.165 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.165 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.165 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:39.165 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:39.165 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:39.425 00:17:39.425 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.425 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.425 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.686 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.686 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.686 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.686 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.686 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.686 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.686 { 00:17:39.686 "cntlid": 79, 00:17:39.686 "qid": 0, 00:17:39.686 "state": "enabled", 00:17:39.686 "thread": "nvmf_tgt_poll_group_000", 00:17:39.686 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:39.686 "listen_address": { 00:17:39.686 "trtype": "TCP", 00:17:39.686 "adrfam": "IPv4", 00:17:39.686 "traddr": "10.0.0.2", 00:17:39.686 "trsvcid": "4420" 00:17:39.686 }, 00:17:39.686 "peer_address": { 00:17:39.686 "trtype": "TCP", 00:17:39.686 "adrfam": "IPv4", 00:17:39.686 "traddr": "10.0.0.1", 00:17:39.686 "trsvcid": "35848" 00:17:39.686 }, 00:17:39.686 "auth": { 00:17:39.686 "state": "completed", 00:17:39.686 "digest": "sha384", 00:17:39.686 "dhgroup": "ffdhe4096" 00:17:39.686 } 00:17:39.686 } 00:17:39.686 ]' 00:17:39.686 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.686 19:07:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.686 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.686 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:39.686 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.948 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.948 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.948 19:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.948 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=: 00:17:39.948 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=: 00:17:40.519 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.780 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:40.780 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.780 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.780 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.780 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:40.780 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.780 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:40.780 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:40.780 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:40.780 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.780 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:40.780 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:40.780 19:07:57 
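Each connect_authenticate pass in this log boils down to three RPC calls split across two applications. A condensed sketch of the sequence for the key0/ffdhe6144 pass that follows, with paths, addresses and NQNs exactly as they appear in this run ($rpc and $hostnqn are shorthands introduced here for readability):

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    hostnqn="nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be"
    # host side: restrict the initiator to one digest/dhgroup combination
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    # target side: allow the host NQN and bind its DH-HMAC-CHAP key pair
    "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host side: attach a controller, forcing authentication with key0/ckey0
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0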
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:40.780 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.780 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.780 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.780 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.780 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.780 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.780 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.780 19:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.353 00:17:41.354 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.354 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.354 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.354 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.354 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.354 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.354 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.354 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.354 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.354 { 00:17:41.354 "cntlid": 81, 00:17:41.354 "qid": 0, 00:17:41.354 "state": "enabled", 00:17:41.354 "thread": "nvmf_tgt_poll_group_000", 00:17:41.354 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:41.354 "listen_address": { 00:17:41.354 "trtype": "TCP", 00:17:41.354 "adrfam": "IPv4", 00:17:41.354 "traddr": "10.0.0.2", 00:17:41.354 "trsvcid": "4420" 00:17:41.354 }, 00:17:41.354 "peer_address": { 00:17:41.354 "trtype": "TCP", 00:17:41.354 "adrfam": "IPv4", 00:17:41.354 "traddr": "10.0.0.1", 00:17:41.354 "trsvcid": "35874" 00:17:41.354 }, 00:17:41.354 "auth": { 00:17:41.354 "state": "completed", 00:17:41.354 "digest": 
"sha384", 00:17:41.354 "dhgroup": "ffdhe6144" 00:17:41.354 } 00:17:41.354 } 00:17:41.354 ]' 00:17:41.354 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.354 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:41.354 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.354 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:41.354 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.616 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.616 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.616 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.616 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDdkOTQ3N2NmMmFhZDRjOWM3MzQ2YTMxZTJmZDY3YTA5ZDIyOTk0MGViNDQwMjc3A3oXQQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY1ZGY5OTJkNTA1MDhiYTcyZWUxODVkMWQyYzI5YjYyOWU0ZmY0MDgwNzYwYjM1ZDYwYTlmMGVkZDQzNzYyNh9xvq0=: 00:17:41.616 19:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZDdkOTQ3N2NmMmFhZDRjOWM3MzQ2YTMxZTJmZDY3YTA5ZDIyOTk0MGViNDQwMjc3A3oXQQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY1ZGY5OTJkNTA1MDhiYTcyZWUxODVkMWQyYzI5YjYyOWU0ZmY0MDgwNzYwYjM1ZDYwYTlmMGVkZDQzNzYyNh9xvq0=: 00:17:42.560 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.560 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:42.560 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.560 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.560 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.560 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.560 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:42.560 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:42.560 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:42.560 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.560 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:42.560 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:42.560 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:42.560 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.560 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.560 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.560 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.560 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.560 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.560 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.560 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.822 00:17:42.822 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.822 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.822 19:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.084 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.084 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.084 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.084 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.084 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.084 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.084 { 00:17:43.084 "cntlid": 83, 00:17:43.084 "qid": 0, 00:17:43.084 "state": "enabled", 00:17:43.084 "thread": "nvmf_tgt_poll_group_000", 00:17:43.084 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:43.084 "listen_address": { 00:17:43.084 "trtype": "TCP", 00:17:43.084 "adrfam": "IPv4", 00:17:43.084 "traddr": "10.0.0.2", 00:17:43.084 
"trsvcid": "4420" 00:17:43.084 }, 00:17:43.084 "peer_address": { 00:17:43.084 "trtype": "TCP", 00:17:43.084 "adrfam": "IPv4", 00:17:43.084 "traddr": "10.0.0.1", 00:17:43.084 "trsvcid": "35910" 00:17:43.084 }, 00:17:43.084 "auth": { 00:17:43.084 "state": "completed", 00:17:43.084 "digest": "sha384", 00:17:43.084 "dhgroup": "ffdhe6144" 00:17:43.084 } 00:17:43.084 } 00:17:43.084 ]' 00:17:43.084 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.084 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:43.084 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.084 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:43.084 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.346 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.346 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.346 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.346 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: --dhchap-ctrl-secret DHHC-1:02:MzI3NzI4YjRiYzQ1ZWRjMzcwM2Y2ODVjNDZhMTcwOTQxNjgzN2YzNWFkMzVhYjcwxm6ZNA==: 00:17:43.346 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: --dhchap-ctrl-secret DHHC-1:02:MzI3NzI4YjRiYzQ1ZWRjMzcwM2Y2ODVjNDZhMTcwOTQxNjgzN2YzNWFkMzVhYjcwxm6ZNA==: 00:17:43.918 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.180 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:44.180 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.180 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.180 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.180 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.180 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:44.180 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:44.180 
19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:44.180 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.180 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:44.180 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:44.180 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:44.180 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.180 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.180 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.180 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.180 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.180 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.180 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.180 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.442 00:17:44.702 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.702 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.702 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.702 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.703 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.703 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.703 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.703 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.703 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.703 { 00:17:44.703 "cntlid": 85, 00:17:44.703 "qid": 0, 00:17:44.703 "state": "enabled", 00:17:44.703 "thread": "nvmf_tgt_poll_group_000", 00:17:44.703 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:44.703 "listen_address": { 00:17:44.703 "trtype": "TCP", 00:17:44.703 "adrfam": "IPv4", 00:17:44.703 "traddr": "10.0.0.2", 00:17:44.703 "trsvcid": "4420" 00:17:44.703 }, 00:17:44.703 "peer_address": { 00:17:44.703 "trtype": "TCP", 00:17:44.703 "adrfam": "IPv4", 00:17:44.703 "traddr": "10.0.0.1", 00:17:44.703 "trsvcid": "35938" 00:17:44.703 }, 00:17:44.703 "auth": { 00:17:44.703 "state": "completed", 00:17:44.703 "digest": "sha384", 00:17:44.703 "dhgroup": "ffdhe6144" 00:17:44.703 } 00:17:44.703 } 00:17:44.703 ]' 00:17:44.703 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.703 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:44.703 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.963 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:44.963 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.963 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.963 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.963 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.963 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: --dhchap-ctrl-secret DHHC-1:01:MmZlMmE3MDA5ZGZjYmU3NzM0NjAwNDJjNzllY2ViN2SNmBcN: 00:17:44.963 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: --dhchap-ctrl-secret DHHC-1:01:MmZlMmE3MDA5ZGZjYmU3NzM0NjAwNDJjNzllY2ViN2SNmBcN: 00:17:45.908 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.908 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:45.908 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.908 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.908 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.908 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.908 19:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:45.908 19:08:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:45.908 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:45.908 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.908 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:45.908 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:45.908 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:45.908 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.908 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:45.908 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.908 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.908 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.908 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:45.908 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:45.908 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:46.169 00:17:46.430 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.430 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.430 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.430 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.430 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.430 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.430 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.430 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.430 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.430 { 00:17:46.430 "cntlid": 87, 
00:17:46.430 "qid": 0, 00:17:46.430 "state": "enabled", 00:17:46.430 "thread": "nvmf_tgt_poll_group_000", 00:17:46.430 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:46.430 "listen_address": { 00:17:46.430 "trtype": "TCP", 00:17:46.430 "adrfam": "IPv4", 00:17:46.430 "traddr": "10.0.0.2", 00:17:46.430 "trsvcid": "4420" 00:17:46.430 }, 00:17:46.430 "peer_address": { 00:17:46.430 "trtype": "TCP", 00:17:46.430 "adrfam": "IPv4", 00:17:46.430 "traddr": "10.0.0.1", 00:17:46.430 "trsvcid": "35964" 00:17:46.430 }, 00:17:46.430 "auth": { 00:17:46.430 "state": "completed", 00:17:46.430 "digest": "sha384", 00:17:46.430 "dhgroup": "ffdhe6144" 00:17:46.430 } 00:17:46.430 } 00:17:46.430 ]' 00:17:46.430 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.430 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:46.430 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.692 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:46.692 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.692 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.692 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.692 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.692 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=: 00:17:46.692 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=: 00:17:47.659 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.659 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:47.659 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.659 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.659 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.659 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:47.659 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.659 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:47.659 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:47.659 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:47.659 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.659 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:47.659 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:47.659 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:47.659 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.659 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.659 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.659 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.659 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.659 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.659 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.659 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.311 00:17:48.311 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.311 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.312 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.312 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.312 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.312 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.312 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.312 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.312 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.312 { 00:17:48.312 "cntlid": 89, 00:17:48.312 "qid": 0, 00:17:48.312 "state": "enabled", 00:17:48.312 "thread": "nvmf_tgt_poll_group_000", 00:17:48.312 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:48.312 "listen_address": { 00:17:48.312 "trtype": "TCP", 00:17:48.312 "adrfam": "IPv4", 00:17:48.312 "traddr": "10.0.0.2", 00:17:48.312 "trsvcid": "4420" 00:17:48.312 }, 00:17:48.312 "peer_address": { 00:17:48.312 "trtype": "TCP", 00:17:48.312 "adrfam": "IPv4", 00:17:48.312 "traddr": "10.0.0.1", 00:17:48.312 "trsvcid": "42674" 00:17:48.312 }, 00:17:48.312 "auth": { 00:17:48.312 "state": "completed", 00:17:48.312 "digest": "sha384", 00:17:48.312 "dhgroup": "ffdhe8192" 00:17:48.312 } 00:17:48.312 } 00:17:48.312 ]' 00:17:48.312 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.312 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:48.312 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.312 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:48.312 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.604 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.604 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.604 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.604 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDdkOTQ3N2NmMmFhZDRjOWM3MzQ2YTMxZTJmZDY3YTA5ZDIyOTk0MGViNDQwMjc3A3oXQQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY1ZGY5OTJkNTA1MDhiYTcyZWUxODVkMWQyYzI5YjYyOWU0ZmY0MDgwNzYwYjM1ZDYwYTlmMGVkZDQzNzYyNh9xvq0=: 00:17:48.604 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZDdkOTQ3N2NmMmFhZDRjOWM3MzQ2YTMxZTJmZDY3YTA5ZDIyOTk0MGViNDQwMjc3A3oXQQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY1ZGY5OTJkNTA1MDhiYTcyZWUxODVkMWQyYzI5YjYyOWU0ZmY0MDgwNzYwYjM1ZDYwYTlmMGVkZDQzNzYyNh9xvq0=: 00:17:49.192 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.192 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:49.192 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.192 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.192 19:08:06 
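Two SPDK applications are being driven side by side here: the target on the default RPC socket (reached via rpc_cmd) and a host-side bdev/nvme stack on /var/tmp/host.sock (reached via hostrpc, whose expansion is traced at target/auth.sh@31 throughout). A minimal equivalent of that wrapper, assuming the same workspace layout as this run:

    hostrpc() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/host.sock "$@"
    }
    hostrpc bdev_nvme_get_controllers | jq -r '.[].name'   # -> nvme0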
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.192 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.192 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:49.192 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:49.453 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:49.453 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.453 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:49.453 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:49.453 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:49.453 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.453 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.453 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.453 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.453 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.453 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.453 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.453 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.025 00:17:50.025 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.025 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.025 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.025 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.025 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:50.025 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.025 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.286 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.286 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.286 { 00:17:50.286 "cntlid": 91, 00:17:50.286 "qid": 0, 00:17:50.286 "state": "enabled", 00:17:50.286 "thread": "nvmf_tgt_poll_group_000", 00:17:50.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:50.286 "listen_address": { 00:17:50.286 "trtype": "TCP", 00:17:50.286 "adrfam": "IPv4", 00:17:50.286 "traddr": "10.0.0.2", 00:17:50.286 "trsvcid": "4420" 00:17:50.286 }, 00:17:50.286 "peer_address": { 00:17:50.286 "trtype": "TCP", 00:17:50.286 "adrfam": "IPv4", 00:17:50.286 "traddr": "10.0.0.1", 00:17:50.286 "trsvcid": "42698" 00:17:50.286 }, 00:17:50.286 "auth": { 00:17:50.286 "state": "completed", 00:17:50.286 "digest": "sha384", 00:17:50.286 "dhgroup": "ffdhe8192" 00:17:50.286 } 00:17:50.286 } 00:17:50.286 ]' 00:17:50.286 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.286 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:50.286 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.286 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:50.286 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.286 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.286 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.286 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.547 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: --dhchap-ctrl-secret DHHC-1:02:MzI3NzI4YjRiYzQ1ZWRjMzcwM2Y2ODVjNDZhMTcwOTQxNjgzN2YzNWFkMzVhYjcwxm6ZNA==: 00:17:50.547 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: --dhchap-ctrl-secret DHHC-1:02:MzI3NzI4YjRiYzQ1ZWRjMzcwM2Y2ODVjNDZhMTcwOTQxNjgzN2YzNWFkMzVhYjcwxm6ZNA==: 00:17:51.119 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.119 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:51.119 19:08:08 
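The recurring common/autotest_common.sh@563/@591 pairs around every rpc_cmd are its xtrace hygiene: tracing is muted while the RPC runs, and the [[ 0 == 0 ]] that follows is the exit-status assertion with $? already substituted by the time xtrace prints it. The actual helper in autotest_common.sh is more involved than this; the sketch below is only a stand-in that reproduces the traced pattern:

    rpc_cmd() {
        xtrace_disable                 # mute tracing while the RPC runs
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"
        local status=$?
        xtrace_restore
        [[ $status == 0 ]]             # traced as "[[ 0 == 0 ]]" on success
    }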
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.119 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.119 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.119 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.119 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:51.119 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:51.380 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:51.380 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:51.380 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:51.380 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:51.380 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:51.380 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.380 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.380 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.380 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.380 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.380 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.380 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.380 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.953 00:17:51.953 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.953 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.953 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.953 19:08:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.953 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.953 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.953 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.953 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.953 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.953 { 00:17:51.953 "cntlid": 93, 00:17:51.953 "qid": 0, 00:17:51.953 "state": "enabled", 00:17:51.953 "thread": "nvmf_tgt_poll_group_000", 00:17:51.953 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:51.953 "listen_address": { 00:17:51.953 "trtype": "TCP", 00:17:51.953 "adrfam": "IPv4", 00:17:51.953 "traddr": "10.0.0.2", 00:17:51.953 "trsvcid": "4420" 00:17:51.953 }, 00:17:51.953 "peer_address": { 00:17:51.953 "trtype": "TCP", 00:17:51.953 "adrfam": "IPv4", 00:17:51.953 "traddr": "10.0.0.1", 00:17:51.953 "trsvcid": "42718" 00:17:51.953 }, 00:17:51.953 "auth": { 00:17:51.953 "state": "completed", 00:17:51.953 "digest": "sha384", 00:17:51.953 "dhgroup": "ffdhe8192" 00:17:51.953 } 00:17:51.953 } 00:17:51.953 ]' 00:17:51.953 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.953 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:51.953 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.214 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:52.214 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.214 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.214 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.214 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.214 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: --dhchap-ctrl-secret DHHC-1:01:MmZlMmE3MDA5ZGZjYmU3NzM0NjAwNDJjNzllY2ViN2SNmBcN: 00:17:52.214 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: --dhchap-ctrl-secret DHHC-1:01:MmZlMmE3MDA5ZGZjYmU3NzM0NjAwNDJjNzllY2ViN2SNmBcN: 00:17:53.158 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.158 19:08:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:53.158 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.158 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.158 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.158 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.158 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:53.158 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:53.158 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:53.158 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.158 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:53.158 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:53.158 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:53.158 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.158 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:53.158 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.158 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.158 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.158 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:53.158 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:53.158 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:53.736 00:17:53.736 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.736 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.736 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.736 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.736 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.736 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.736 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.736 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.736 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.736 { 00:17:53.736 "cntlid": 95, 00:17:53.736 "qid": 0, 00:17:53.736 "state": "enabled", 00:17:53.736 "thread": "nvmf_tgt_poll_group_000", 00:17:53.736 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:53.736 "listen_address": { 00:17:53.736 "trtype": "TCP", 00:17:53.736 "adrfam": "IPv4", 00:17:53.736 "traddr": "10.0.0.2", 00:17:53.736 "trsvcid": "4420" 00:17:53.736 }, 00:17:53.736 "peer_address": { 00:17:53.736 "trtype": "TCP", 00:17:53.736 "adrfam": "IPv4", 00:17:53.736 "traddr": "10.0.0.1", 00:17:53.736 "trsvcid": "42752" 00:17:53.736 }, 00:17:53.736 "auth": { 00:17:53.736 "state": "completed", 00:17:53.736 "digest": "sha384", 00:17:53.736 "dhgroup": "ffdhe8192" 00:17:53.736 } 00:17:53.736 } 00:17:53.736 ]' 00:17:53.736 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.997 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:53.997 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.997 19:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:53.998 19:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.998 19:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.998 19:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.998 19:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.258 19:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=: 00:17:54.258 19:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=: 00:17:54.831 19:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.831 19:08:11 
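Stepping back, the @118/@119/@120 trace markers that reappear just below show the shape of the whole sweep: an outer loop over digests, a loop over dhgroups, and an inner loop over key indices, with the host options reset before each connect_authenticate call. A skeleton reconstructed from the traced lines (condensed, not the verbatim script):

    for digest in "${digests[@]}"; do                      # target/auth.sh@118
        for dhgroup in "${dhgroups[@]}"; do                # target/auth.sh@119
            for keyid in "${!keys[@]}"; do                 # target/auth.sh@120
                hostrpc bdev_nvme_set_options \
                    --dhchap-digests "$digest" \
                    --dhchap-dhgroups "$dhgroup"           # target/auth.sh@121
                connect_authenticate "$digest" "$dhgroup" "$keyid"   # @123
            done
        done
    done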
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:54.831 19:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.831 19:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.831 19:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.831 19:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:54.831 19:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:54.831 19:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.831 19:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:54.831 19:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:55.093 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:55.093 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.093 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:55.093 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:55.093 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:55.093 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.093 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.093 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.093 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.093 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.093 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.093 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.093 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.354 00:17:55.354 
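The entries above are one iteration of the test's digest/dhgroup/keyid loop: the host-side bdev layer is restricted to the combination under test, the target re-admits the host NQN with the key pair for this pass, and a controller attach forces the DH-HMAC-CHAP handshake. A minimal sketch of that per-iteration setup, reusing the RPCs, addresses, and key names visible in the trace (rpc_cmd is the suite's target-side wrapper from common/autotest_common.sh; $hostnqn stands for the uuid-based NQN above):

  # host: allow only the digest/dhgroup pair under test
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
  # target: register the host with key0 (ckey0 enables bidirectional auth)
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # host: attach a controller, which performs the DH-HMAC-CHAP exchange
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0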
19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.354 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.354 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.354 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.354 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.354 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.354 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.354 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.354 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.354 { 00:17:55.354 "cntlid": 97, 00:17:55.354 "qid": 0, 00:17:55.354 "state": "enabled", 00:17:55.354 "thread": "nvmf_tgt_poll_group_000", 00:17:55.355 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:55.355 "listen_address": { 00:17:55.355 "trtype": "TCP", 00:17:55.355 "adrfam": "IPv4", 00:17:55.355 "traddr": "10.0.0.2", 00:17:55.355 "trsvcid": "4420" 00:17:55.355 }, 00:17:55.355 "peer_address": { 00:17:55.355 "trtype": "TCP", 00:17:55.355 "adrfam": "IPv4", 00:17:55.355 "traddr": "10.0.0.1", 00:17:55.355 "trsvcid": "42784" 00:17:55.355 }, 00:17:55.355 "auth": { 00:17:55.355 "state": "completed", 00:17:55.355 "digest": "sha512", 00:17:55.355 "dhgroup": "null" 00:17:55.355 } 00:17:55.355 } 00:17:55.355 ]' 00:17:55.355 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.617 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.617 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.617 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:55.617 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.617 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.617 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.617 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.878 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDdkOTQ3N2NmMmFhZDRjOWM3MzQ2YTMxZTJmZDY3YTA5ZDIyOTk0MGViNDQwMjc3A3oXQQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY1ZGY5OTJkNTA1MDhiYTcyZWUxODVkMWQyYzI5YjYyOWU0ZmY0MDgwNzYwYjM1ZDYwYTlmMGVkZDQzNzYyNh9xvq0=: 00:17:55.878 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZDdkOTQ3N2NmMmFhZDRjOWM3MzQ2YTMxZTJmZDY3YTA5ZDIyOTk0MGViNDQwMjc3A3oXQQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY1ZGY5OTJkNTA1MDhiYTcyZWUxODVkMWQyYzI5YjYyOWU0ZmY0MDgwNzYwYjM1ZDYwYTlmMGVkZDQzNzYyNh9xvq0=: 00:17:56.450 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.450 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:56.450 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.450 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.450 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.450 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.450 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:56.450 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:56.713 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:56.713 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.713 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:56.713 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:56.713 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:56.713 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.713 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.713 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.713 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.713 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.713 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.713 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.713 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.713 00:17:56.974 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.974 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.974 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.974 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.974 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.974 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.974 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.974 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.974 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.974 { 00:17:56.974 "cntlid": 99, 00:17:56.974 "qid": 0, 00:17:56.974 "state": "enabled", 00:17:56.974 "thread": "nvmf_tgt_poll_group_000", 00:17:56.974 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:56.974 "listen_address": { 00:17:56.974 "trtype": "TCP", 00:17:56.974 "adrfam": "IPv4", 00:17:56.974 "traddr": "10.0.0.2", 00:17:56.974 "trsvcid": "4420" 00:17:56.974 }, 00:17:56.974 "peer_address": { 00:17:56.974 "trtype": "TCP", 00:17:56.974 "adrfam": "IPv4", 00:17:56.974 "traddr": "10.0.0.1", 00:17:56.974 "trsvcid": "42806" 00:17:56.974 }, 00:17:56.974 "auth": { 00:17:56.974 "state": "completed", 00:17:56.974 "digest": "sha512", 00:17:56.974 "dhgroup": "null" 00:17:56.974 } 00:17:56.974 } 00:17:56.974 ]' 00:17:56.974 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.974 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.974 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.236 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:57.236 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.236 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.236 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.236 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.236 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: --dhchap-ctrl-secret DHHC-1:02:MzI3NzI4YjRiYzQ1ZWRjMzcwM2Y2ODVjNDZhMTcwOTQxNjgzN2YzNWFkMzVhYjcwxm6ZNA==: 00:17:57.236 19:08:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: --dhchap-ctrl-secret DHHC-1:02:MzI3NzI4YjRiYzQ1ZWRjMzcwM2Y2ODVjNDZhMTcwOTQxNjgzN2YzNWFkMzVhYjcwxm6ZNA==: 00:17:58.179 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.179 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:58.179 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.179 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.179 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.179 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.179 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:58.179 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:58.179 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:17:58.179 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.179 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:58.179 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:58.179 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:58.179 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.179 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.179 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.179 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.179 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.179 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.179 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
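Each attach is followed by the same verification block (target/auth.sh@73-77, traced below for key2): the host must list exactly the controller it created, and the target's qpair must report the negotiated digest, dhgroup, and a completed auth state. A sketch of those assertions using the trace's own jq filters (hostrpc and rpc_cmd are the suite's host- and target-side wrappers):

  # host: the attached controller is visible under its expected name
  [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  # target: the qpair authenticated with the parameters under test
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]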
00:17:58.179 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.440 00:17:58.440 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.440 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.440 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.701 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.701 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.701 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.701 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.701 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.701 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.701 { 00:17:58.701 "cntlid": 101, 00:17:58.701 "qid": 0, 00:17:58.701 "state": "enabled", 00:17:58.701 "thread": "nvmf_tgt_poll_group_000", 00:17:58.701 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:58.701 "listen_address": { 00:17:58.701 "trtype": "TCP", 00:17:58.701 "adrfam": "IPv4", 00:17:58.701 "traddr": "10.0.0.2", 00:17:58.701 "trsvcid": "4420" 00:17:58.701 }, 00:17:58.701 "peer_address": { 00:17:58.701 "trtype": "TCP", 00:17:58.701 "adrfam": "IPv4", 00:17:58.701 "traddr": "10.0.0.1", 00:17:58.701 "trsvcid": "48114" 00:17:58.701 }, 00:17:58.701 "auth": { 00:17:58.701 "state": "completed", 00:17:58.701 "digest": "sha512", 00:17:58.701 "dhgroup": "null" 00:17:58.701 } 00:17:58.701 } 00:17:58.701 ]' 00:17:58.701 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.701 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.701 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.701 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:58.701 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.701 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.701 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.701 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.961 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: --dhchap-ctrl-secret DHHC-1:01:MmZlMmE3MDA5ZGZjYmU3NzM0NjAwNDJjNzllY2ViN2SNmBcN: 00:17:58.962 19:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: --dhchap-ctrl-secret DHHC-1:01:MmZlMmE3MDA5ZGZjYmU3NzM0NjAwNDJjNzllY2ViN2SNmBcN: 00:17:59.532 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.532 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:59.532 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.532 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.532 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.532 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.532 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:59.532 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:59.793 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:59.793 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.793 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:59.793 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:59.793 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:59.793 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.793 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:59.793 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.793 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.793 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.793 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:59.793 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:59.793 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:00.053 00:18:00.053 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.053 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.053 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.053 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.315 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.315 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.315 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.315 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.315 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.315 { 00:18:00.315 "cntlid": 103, 00:18:00.315 "qid": 0, 00:18:00.315 "state": "enabled", 00:18:00.315 "thread": "nvmf_tgt_poll_group_000", 00:18:00.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:00.315 "listen_address": { 00:18:00.315 "trtype": "TCP", 00:18:00.315 "adrfam": "IPv4", 00:18:00.315 "traddr": "10.0.0.2", 00:18:00.315 "trsvcid": "4420" 00:18:00.315 }, 00:18:00.315 "peer_address": { 00:18:00.315 "trtype": "TCP", 00:18:00.315 "adrfam": "IPv4", 00:18:00.315 "traddr": "10.0.0.1", 00:18:00.315 "trsvcid": "48126" 00:18:00.315 }, 00:18:00.315 "auth": { 00:18:00.315 "state": "completed", 00:18:00.315 "digest": "sha512", 00:18:00.315 "dhgroup": "null" 00:18:00.315 } 00:18:00.315 } 00:18:00.315 ]' 00:18:00.315 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.315 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.315 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.315 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:00.315 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.315 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.315 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.315 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.577 19:08:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=: 00:18:00.577 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=: 00:18:01.147 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.148 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.148 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:01.148 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.148 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.148 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.148 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:01.148 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.148 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:01.148 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:01.408 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:18:01.408 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.408 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:01.408 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:01.408 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:01.408 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.408 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.408 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.408 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.408 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.408 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
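Alongside the SPDK-initiator attach, every pass also drives the kernel initiator: nvme_connect (target/auth.sh@36) hands the secrets to nvme-cli in their DHHC-1:<id>:<base64>: wire format, and the pass ends with a disconnect and host removal before the next combination. A sketch of that leg, with placeholder secrets standing in for the trace's real test keys:

  # kernel initiator: in-band DH-HMAC-CHAP with command-line secrets
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" -l 0 \
      --dhchap-secret 'DHHC-1:00:<base64-host-key>:' \
      --dhchap-ctrl-secret 'DHHC-1:03:<base64-ctrl-key>:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  # target: revoke the host again before the next digest/dhgroup/key pass
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"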
00:18:01.408 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.408 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.669 00:18:01.669 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.669 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.669 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.931 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.931 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.931 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.931 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.931 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.931 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.931 { 00:18:01.931 "cntlid": 105, 00:18:01.931 "qid": 0, 00:18:01.931 "state": "enabled", 00:18:01.931 "thread": "nvmf_tgt_poll_group_000", 00:18:01.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:01.931 "listen_address": { 00:18:01.931 "trtype": "TCP", 00:18:01.931 "adrfam": "IPv4", 00:18:01.931 "traddr": "10.0.0.2", 00:18:01.931 "trsvcid": "4420" 00:18:01.931 }, 00:18:01.931 "peer_address": { 00:18:01.931 "trtype": "TCP", 00:18:01.931 "adrfam": "IPv4", 00:18:01.931 "traddr": "10.0.0.1", 00:18:01.931 "trsvcid": "48160" 00:18:01.931 }, 00:18:01.931 "auth": { 00:18:01.931 "state": "completed", 00:18:01.931 "digest": "sha512", 00:18:01.931 "dhgroup": "ffdhe2048" 00:18:01.931 } 00:18:01.931 } 00:18:01.931 ]' 00:18:01.931 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.931 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.931 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.931 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:01.931 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.931 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.931 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.931 19:08:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.197 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDdkOTQ3N2NmMmFhZDRjOWM3MzQ2YTMxZTJmZDY3YTA5ZDIyOTk0MGViNDQwMjc3A3oXQQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY1ZGY5OTJkNTA1MDhiYTcyZWUxODVkMWQyYzI5YjYyOWU0ZmY0MDgwNzYwYjM1ZDYwYTlmMGVkZDQzNzYyNh9xvq0=: 00:18:02.197 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZDdkOTQ3N2NmMmFhZDRjOWM3MzQ2YTMxZTJmZDY3YTA5ZDIyOTk0MGViNDQwMjc3A3oXQQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY1ZGY5OTJkNTA1MDhiYTcyZWUxODVkMWQyYzI5YjYyOWU0ZmY0MDgwNzYwYjM1ZDYwYTlmMGVkZDQzNzYyNh9xvq0=: 00:18:02.768 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.768 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:02.768 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.768 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.768 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.768 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.768 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:02.768 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:03.028 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:18:03.028 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.028 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:03.028 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:03.028 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:03.028 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.028 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.028 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.028 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:03.028 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.028 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.028 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.028 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.288 00:18:03.289 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.289 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.289 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.289 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.289 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.289 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.289 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.550 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.550 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.550 { 00:18:03.550 "cntlid": 107, 00:18:03.550 "qid": 0, 00:18:03.550 "state": "enabled", 00:18:03.550 "thread": "nvmf_tgt_poll_group_000", 00:18:03.550 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:03.550 "listen_address": { 00:18:03.550 "trtype": "TCP", 00:18:03.550 "adrfam": "IPv4", 00:18:03.550 "traddr": "10.0.0.2", 00:18:03.551 "trsvcid": "4420" 00:18:03.551 }, 00:18:03.551 "peer_address": { 00:18:03.551 "trtype": "TCP", 00:18:03.551 "adrfam": "IPv4", 00:18:03.551 "traddr": "10.0.0.1", 00:18:03.551 "trsvcid": "48170" 00:18:03.551 }, 00:18:03.551 "auth": { 00:18:03.551 "state": "completed", 00:18:03.551 "digest": "sha512", 00:18:03.551 "dhgroup": "ffdhe2048" 00:18:03.551 } 00:18:03.551 } 00:18:03.551 ]' 00:18:03.551 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.551 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.551 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.551 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:03.551 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:18:03.551 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.551 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.551 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.813 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: --dhchap-ctrl-secret DHHC-1:02:MzI3NzI4YjRiYzQ1ZWRjMzcwM2Y2ODVjNDZhMTcwOTQxNjgzN2YzNWFkMzVhYjcwxm6ZNA==: 00:18:03.813 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: --dhchap-ctrl-secret DHHC-1:02:MzI3NzI4YjRiYzQ1ZWRjMzcwM2Y2ODVjNDZhMTcwOTQxNjgzN2YzNWFkMzVhYjcwxm6ZNA==: 00:18:04.383 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.384 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:04.384 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.384 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.384 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.384 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.384 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:04.384 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:04.644 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:18:04.644 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.644 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:04.644 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:04.644 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:04.645 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.645 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
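One scripting detail worth noting in the trace: the assignment at target/auth.sh@68, ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}), uses bash's ${var:+word} expansion so that the controller-key option disappears entirely when no ckey is defined for that key id, rather than passing an empty argument. A standalone illustration of the idiom (the key table here is hypothetical):

  #!/usr/bin/env bash
  ckeys=("secret0" "")   # hypothetical: key id 1 has no controller key
  for keyid in 0 1; do
      # expands to two words only when ckeys[$keyid] is set and non-empty
      ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
      echo "keyid=$keyid -> ${ckey[*]:-<no ctrlr-key option>}"
  done
  # keyid=0 -> --dhchap-ctrlr-key ckey0
  # keyid=1 -> <no ctrlr-key option>

This matches the key3 passes in the trace, whose nvmf_subsystem_add_host and attach calls carry no --dhchap-ctrlr-key at all.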
00:18:04.645 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.645 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.645 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.645 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.645 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.645 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.905 00:18:04.905 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.905 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.905 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.905 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.905 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.905 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.905 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.167 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.167 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.167 { 00:18:05.167 "cntlid": 109, 00:18:05.167 "qid": 0, 00:18:05.167 "state": "enabled", 00:18:05.167 "thread": "nvmf_tgt_poll_group_000", 00:18:05.167 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:05.167 "listen_address": { 00:18:05.167 "trtype": "TCP", 00:18:05.167 "adrfam": "IPv4", 00:18:05.167 "traddr": "10.0.0.2", 00:18:05.167 "trsvcid": "4420" 00:18:05.167 }, 00:18:05.167 "peer_address": { 00:18:05.167 "trtype": "TCP", 00:18:05.167 "adrfam": "IPv4", 00:18:05.167 "traddr": "10.0.0.1", 00:18:05.167 "trsvcid": "48192" 00:18:05.167 }, 00:18:05.167 "auth": { 00:18:05.167 "state": "completed", 00:18:05.167 "digest": "sha512", 00:18:05.167 "dhgroup": "ffdhe2048" 00:18:05.167 } 00:18:05.167 } 00:18:05.167 ]' 00:18:05.167 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.167 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.167 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.167 19:08:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:05.167 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.167 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.167 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.167 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.428 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: --dhchap-ctrl-secret DHHC-1:01:MmZlMmE3MDA5ZGZjYmU3NzM0NjAwNDJjNzllY2ViN2SNmBcN: 00:18:05.428 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: --dhchap-ctrl-secret DHHC-1:01:MmZlMmE3MDA5ZGZjYmU3NzM0NjAwNDJjNzllY2ViN2SNmBcN: 00:18:06.000 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.000 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:06.000 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.000 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.000 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.000 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.000 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:06.000 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:06.262 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:18:06.262 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.262 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:06.262 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:06.262 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:06.262 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.262 19:08:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:06.262 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.262 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.262 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.262 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:06.262 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:06.262 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:06.524 00:18:06.524 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.524 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.524 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.524 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.524 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.524 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.524 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.524 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.524 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.524 { 00:18:06.524 "cntlid": 111, 00:18:06.524 "qid": 0, 00:18:06.524 "state": "enabled", 00:18:06.524 "thread": "nvmf_tgt_poll_group_000", 00:18:06.524 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:06.524 "listen_address": { 00:18:06.524 "trtype": "TCP", 00:18:06.524 "adrfam": "IPv4", 00:18:06.524 "traddr": "10.0.0.2", 00:18:06.524 "trsvcid": "4420" 00:18:06.524 }, 00:18:06.524 "peer_address": { 00:18:06.524 "trtype": "TCP", 00:18:06.524 "adrfam": "IPv4", 00:18:06.524 "traddr": "10.0.0.1", 00:18:06.524 "trsvcid": "48216" 00:18:06.524 }, 00:18:06.524 "auth": { 00:18:06.524 "state": "completed", 00:18:06.524 "digest": "sha512", 00:18:06.524 "dhgroup": "ffdhe2048" 00:18:06.524 } 00:18:06.524 } 00:18:06.524 ]' 00:18:06.524 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.785 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.785 
19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.785 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:06.785 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.785 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.785 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.785 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.046 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=: 00:18:07.046 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=: 00:18:07.617 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.617 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:07.617 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.617 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.617 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.617 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:07.617 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.617 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:07.617 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:07.877 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:18:07.877 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.878 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:07.878 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:07.878 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:07.878 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.878 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.878 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.878 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.878 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.878 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.878 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.878 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.138 00:18:08.138 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.138 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.138 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.138 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.138 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.138 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.138 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.138 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.138 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.138 { 00:18:08.138 "cntlid": 113, 00:18:08.138 "qid": 0, 00:18:08.138 "state": "enabled", 00:18:08.138 "thread": "nvmf_tgt_poll_group_000", 00:18:08.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:08.138 "listen_address": { 00:18:08.138 "trtype": "TCP", 00:18:08.138 "adrfam": "IPv4", 00:18:08.138 "traddr": "10.0.0.2", 00:18:08.138 "trsvcid": "4420" 00:18:08.138 }, 00:18:08.138 "peer_address": { 00:18:08.138 "trtype": "TCP", 00:18:08.138 "adrfam": "IPv4", 00:18:08.138 "traddr": "10.0.0.1", 00:18:08.138 "trsvcid": "48300" 00:18:08.138 }, 00:18:08.138 "auth": { 00:18:08.138 "state": "completed", 00:18:08.138 "digest": "sha512", 00:18:08.138 "dhgroup": "ffdhe3072" 00:18:08.139 } 00:18:08.139 } 00:18:08.139 ]' 00:18:08.139 19:08:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.400 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.400 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.400 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:08.400 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.400 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.400 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.400 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.661 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDdkOTQ3N2NmMmFhZDRjOWM3MzQ2YTMxZTJmZDY3YTA5ZDIyOTk0MGViNDQwMjc3A3oXQQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY1ZGY5OTJkNTA1MDhiYTcyZWUxODVkMWQyYzI5YjYyOWU0ZmY0MDgwNzYwYjM1ZDYwYTlmMGVkZDQzNzYyNh9xvq0=: 00:18:08.661 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZDdkOTQ3N2NmMmFhZDRjOWM3MzQ2YTMxZTJmZDY3YTA5ZDIyOTk0MGViNDQwMjc3A3oXQQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY1ZGY5OTJkNTA1MDhiYTcyZWUxODVkMWQyYzI5YjYyOWU0ZmY0MDgwNzYwYjM1ZDYwYTlmMGVkZDQzNzYyNh9xvq0=: 00:18:09.232 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.232 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:09.232 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.232 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.232 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.232 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.232 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:09.232 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:09.492 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:18:09.492 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.492 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:18:09.492 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:09.492 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:09.492 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.492 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.492 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.492 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.492 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.493 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.493 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.493 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.493 00:18:09.753 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.753 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.753 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.753 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.753 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.753 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.753 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.753 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.753 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.753 { 00:18:09.753 "cntlid": 115, 00:18:09.753 "qid": 0, 00:18:09.753 "state": "enabled", 00:18:09.754 "thread": "nvmf_tgt_poll_group_000", 00:18:09.754 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:09.754 "listen_address": { 00:18:09.754 "trtype": "TCP", 00:18:09.754 "adrfam": "IPv4", 00:18:09.754 "traddr": "10.0.0.2", 00:18:09.754 "trsvcid": "4420" 00:18:09.754 }, 00:18:09.754 "peer_address": { 00:18:09.754 "trtype": "TCP", 00:18:09.754 "adrfam": "IPv4", 
00:18:09.754 "traddr": "10.0.0.1", 00:18:09.754 "trsvcid": "48338" 00:18:09.754 }, 00:18:09.754 "auth": { 00:18:09.754 "state": "completed", 00:18:09.754 "digest": "sha512", 00:18:09.754 "dhgroup": "ffdhe3072" 00:18:09.754 } 00:18:09.754 } 00:18:09.754 ]' 00:18:09.754 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.015 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.015 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.015 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:10.015 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.015 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.015 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.015 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.276 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: --dhchap-ctrl-secret DHHC-1:02:MzI3NzI4YjRiYzQ1ZWRjMzcwM2Y2ODVjNDZhMTcwOTQxNjgzN2YzNWFkMzVhYjcwxm6ZNA==: 00:18:10.276 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: --dhchap-ctrl-secret DHHC-1:02:MzI3NzI4YjRiYzQ1ZWRjMzcwM2Y2ODVjNDZhMTcwOTQxNjgzN2YzNWFkMzVhYjcwxm6ZNA==: 00:18:10.846 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.846 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:10.846 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.846 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.846 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.846 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.846 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:10.847 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:11.108 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:18:11.108 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.108 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:11.108 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:11.108 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:11.108 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.108 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.108 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.108 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.108 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.108 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.108 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.108 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.368 00:18:11.368 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:11.368 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.368 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.368 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.368 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.368 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.368 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.629 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.629 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.629 { 00:18:11.629 "cntlid": 117, 00:18:11.629 "qid": 0, 00:18:11.629 "state": "enabled", 00:18:11.629 "thread": "nvmf_tgt_poll_group_000", 00:18:11.629 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:11.629 "listen_address": { 00:18:11.629 "trtype": "TCP", 
00:18:11.629 "adrfam": "IPv4", 00:18:11.629 "traddr": "10.0.0.2", 00:18:11.629 "trsvcid": "4420" 00:18:11.629 }, 00:18:11.629 "peer_address": { 00:18:11.629 "trtype": "TCP", 00:18:11.629 "adrfam": "IPv4", 00:18:11.629 "traddr": "10.0.0.1", 00:18:11.629 "trsvcid": "48362" 00:18:11.629 }, 00:18:11.629 "auth": { 00:18:11.629 "state": "completed", 00:18:11.629 "digest": "sha512", 00:18:11.629 "dhgroup": "ffdhe3072" 00:18:11.629 } 00:18:11.629 } 00:18:11.629 ]' 00:18:11.629 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.629 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.629 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.629 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:11.629 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.629 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.629 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.629 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.889 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: --dhchap-ctrl-secret DHHC-1:01:MmZlMmE3MDA5ZGZjYmU3NzM0NjAwNDJjNzllY2ViN2SNmBcN: 00:18:11.889 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: --dhchap-ctrl-secret DHHC-1:01:MmZlMmE3MDA5ZGZjYmU3NzM0NjAwNDJjNzllY2ViN2SNmBcN: 00:18:12.458 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.458 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:12.458 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.458 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.458 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.458 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.458 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:12.458 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:12.718 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:18:12.718 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.718 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:12.718 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:12.718 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:12.718 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.718 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:12.718 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.718 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.718 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.718 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:12.718 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:12.718 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:12.977 00:18:12.977 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.977 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.977 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.236 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.236 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.236 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.236 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.236 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.236 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.236 { 00:18:13.236 "cntlid": 119, 00:18:13.236 "qid": 0, 00:18:13.236 "state": "enabled", 00:18:13.236 "thread": "nvmf_tgt_poll_group_000", 00:18:13.236 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:13.236 "listen_address": { 00:18:13.236 "trtype": "TCP", 00:18:13.236 "adrfam": "IPv4", 00:18:13.236 "traddr": "10.0.0.2", 00:18:13.236 "trsvcid": "4420" 00:18:13.236 }, 00:18:13.236 "peer_address": { 00:18:13.236 "trtype": "TCP", 00:18:13.236 "adrfam": "IPv4", 00:18:13.236 "traddr": "10.0.0.1", 00:18:13.236 "trsvcid": "48378" 00:18:13.236 }, 00:18:13.236 "auth": { 00:18:13.236 "state": "completed", 00:18:13.236 "digest": "sha512", 00:18:13.236 "dhgroup": "ffdhe3072" 00:18:13.236 } 00:18:13.236 } 00:18:13.236 ]' 00:18:13.236 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.236 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.236 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.236 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:13.236 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.236 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.236 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.236 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.494 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=: 00:18:13.494 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=: 00:18:14.062 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.062 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:14.062 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.062 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.062 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.062 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:14.062 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.062 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:14.062 19:08:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:14.322 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:18:14.322 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:14.322 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:14.322 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:14.322 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:14.322 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.322 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.322 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.322 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.322 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.322 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.322 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.322 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.583 00:18:14.583 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.583 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.583 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.843 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.843 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.843 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.843 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.843 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.843 19:08:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.843 { 00:18:14.843 "cntlid": 121, 00:18:14.844 "qid": 0, 00:18:14.844 "state": "enabled", 00:18:14.844 "thread": "nvmf_tgt_poll_group_000", 00:18:14.844 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:14.844 "listen_address": { 00:18:14.844 "trtype": "TCP", 00:18:14.844 "adrfam": "IPv4", 00:18:14.844 "traddr": "10.0.0.2", 00:18:14.844 "trsvcid": "4420" 00:18:14.844 }, 00:18:14.844 "peer_address": { 00:18:14.844 "trtype": "TCP", 00:18:14.844 "adrfam": "IPv4", 00:18:14.844 "traddr": "10.0.0.1", 00:18:14.844 "trsvcid": "48408" 00:18:14.844 }, 00:18:14.844 "auth": { 00:18:14.844 "state": "completed", 00:18:14.844 "digest": "sha512", 00:18:14.844 "dhgroup": "ffdhe4096" 00:18:14.844 } 00:18:14.844 } 00:18:14.844 ]' 00:18:14.844 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.844 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:14.844 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.844 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:14.844 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.844 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.844 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.844 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.104 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDdkOTQ3N2NmMmFhZDRjOWM3MzQ2YTMxZTJmZDY3YTA5ZDIyOTk0MGViNDQwMjc3A3oXQQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY1ZGY5OTJkNTA1MDhiYTcyZWUxODVkMWQyYzI5YjYyOWU0ZmY0MDgwNzYwYjM1ZDYwYTlmMGVkZDQzNzYyNh9xvq0=: 00:18:15.104 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZDdkOTQ3N2NmMmFhZDRjOWM3MzQ2YTMxZTJmZDY3YTA5ZDIyOTk0MGViNDQwMjc3A3oXQQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY1ZGY5OTJkNTA1MDhiYTcyZWUxODVkMWQyYzI5YjYyOWU0ZmY0MDgwNzYwYjM1ZDYwYTlmMGVkZDQzNzYyNh9xvq0=: 00:18:15.673 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.673 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:15.673 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.673 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.673 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
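[annotation] Between attach and detach, every iteration runs the same read-back checks that the trace records just above for the key0/ffdhe4096 pass: the controller name from the host socket, then the subsystem's qpair list from the target, asserted field by field with jq. A condensed sketch of that verification, using the same RPC sockets as the log (the `rpc`, `subnqn`, and `qpairs` variable names here are illustrative; the script itself goes through its hostrpc/rpc_cmd wrappers):

    # host side: the attached controller must be visible under the expected name
    [[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # target side: the qpair's negotiated auth parameters must match this iteration
    qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

The `state == completed` comparison is the check that DH-HMAC-CHAP negotiation actually ran to completion on that queue pair for the digest and DH group configured in this iteration.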
00:18:15.673 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.673 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:15.673 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:15.934 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:15.934 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.934 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:15.934 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:15.934 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:15.934 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.934 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.934 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.934 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.934 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.934 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.934 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.934 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.193 00:18:16.193 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:16.193 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:16.193 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.454 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.454 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.454 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.454 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.454 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.454 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.454 { 00:18:16.454 "cntlid": 123, 00:18:16.454 "qid": 0, 00:18:16.454 "state": "enabled", 00:18:16.454 "thread": "nvmf_tgt_poll_group_000", 00:18:16.454 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:16.454 "listen_address": { 00:18:16.454 "trtype": "TCP", 00:18:16.454 "adrfam": "IPv4", 00:18:16.454 "traddr": "10.0.0.2", 00:18:16.454 "trsvcid": "4420" 00:18:16.454 }, 00:18:16.454 "peer_address": { 00:18:16.454 "trtype": "TCP", 00:18:16.454 "adrfam": "IPv4", 00:18:16.454 "traddr": "10.0.0.1", 00:18:16.454 "trsvcid": "48432" 00:18:16.454 }, 00:18:16.454 "auth": { 00:18:16.454 "state": "completed", 00:18:16.454 "digest": "sha512", 00:18:16.454 "dhgroup": "ffdhe4096" 00:18:16.454 } 00:18:16.454 } 00:18:16.454 ]' 00:18:16.454 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.454 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:16.454 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.454 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:16.454 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.454 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.454 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.454 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.715 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: --dhchap-ctrl-secret DHHC-1:02:MzI3NzI4YjRiYzQ1ZWRjMzcwM2Y2ODVjNDZhMTcwOTQxNjgzN2YzNWFkMzVhYjcwxm6ZNA==: 00:18:16.715 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: --dhchap-ctrl-secret DHHC-1:02:MzI3NzI4YjRiYzQ1ZWRjMzcwM2Y2ODVjNDZhMTcwOTQxNjgzN2YzNWFkMzVhYjcwxm6ZNA==: 00:18:17.284 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.284 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:17.284 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.284 19:08:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.284 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.284 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.284 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:17.284 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:17.543 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:17.543 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.543 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:17.543 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:17.543 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:17.543 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.543 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.543 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.543 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.543 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.543 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.543 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.543 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.803 00:18:17.803 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.803 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.803 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.068 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.068 19:08:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.068 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.068 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.068 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.068 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.068 { 00:18:18.068 "cntlid": 125, 00:18:18.068 "qid": 0, 00:18:18.068 "state": "enabled", 00:18:18.068 "thread": "nvmf_tgt_poll_group_000", 00:18:18.068 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:18.068 "listen_address": { 00:18:18.068 "trtype": "TCP", 00:18:18.068 "adrfam": "IPv4", 00:18:18.068 "traddr": "10.0.0.2", 00:18:18.068 "trsvcid": "4420" 00:18:18.068 }, 00:18:18.068 "peer_address": { 00:18:18.068 "trtype": "TCP", 00:18:18.068 "adrfam": "IPv4", 00:18:18.068 "traddr": "10.0.0.1", 00:18:18.068 "trsvcid": "33456" 00:18:18.068 }, 00:18:18.068 "auth": { 00:18:18.068 "state": "completed", 00:18:18.068 "digest": "sha512", 00:18:18.068 "dhgroup": "ffdhe4096" 00:18:18.068 } 00:18:18.068 } 00:18:18.068 ]' 00:18:18.068 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.068 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:18.068 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:18.068 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:18.068 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:18.068 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.068 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.068 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.369 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: --dhchap-ctrl-secret DHHC-1:01:MmZlMmE3MDA5ZGZjYmU3NzM0NjAwNDJjNzllY2ViN2SNmBcN: 00:18:18.369 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: --dhchap-ctrl-secret DHHC-1:01:MmZlMmE3MDA5ZGZjYmU3NzM0NjAwNDJjNzllY2ViN2SNmBcN: 00:18:19.037 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.038 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:19.038 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.038 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.038 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.038 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:19.038 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:19.038 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:19.298 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:19.298 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.298 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:19.298 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:19.298 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:19.298 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.298 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:19.298 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.298 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.298 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.298 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:19.298 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:19.298 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:19.560 00:18:19.560 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.560 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.560 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.560 19:08:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.560 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.560 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.560 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.560 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.560 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.560 { 00:18:19.560 "cntlid": 127, 00:18:19.560 "qid": 0, 00:18:19.560 "state": "enabled", 00:18:19.560 "thread": "nvmf_tgt_poll_group_000", 00:18:19.560 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:19.560 "listen_address": { 00:18:19.560 "trtype": "TCP", 00:18:19.560 "adrfam": "IPv4", 00:18:19.560 "traddr": "10.0.0.2", 00:18:19.560 "trsvcid": "4420" 00:18:19.560 }, 00:18:19.560 "peer_address": { 00:18:19.560 "trtype": "TCP", 00:18:19.560 "adrfam": "IPv4", 00:18:19.560 "traddr": "10.0.0.1", 00:18:19.560 "trsvcid": "33496" 00:18:19.560 }, 00:18:19.560 "auth": { 00:18:19.560 "state": "completed", 00:18:19.560 "digest": "sha512", 00:18:19.560 "dhgroup": "ffdhe4096" 00:18:19.560 } 00:18:19.560 } 00:18:19.560 ]' 00:18:19.560 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.560 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.868 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.868 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:19.868 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.868 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.868 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.868 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.868 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=: 00:18:19.868 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=: 00:18:20.810 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.810 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:20.810 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.810 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.810 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.810 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:20.810 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:20.810 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:20.810 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:20.810 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:20.810 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.810 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:20.810 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:20.810 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:20.810 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.810 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.810 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.810 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.810 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.811 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.811 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.811 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.072 00:18:21.072 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:21.072 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:21.072 
19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.333 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.333 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.333 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.333 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.333 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.333 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:21.333 { 00:18:21.333 "cntlid": 129, 00:18:21.333 "qid": 0, 00:18:21.333 "state": "enabled", 00:18:21.333 "thread": "nvmf_tgt_poll_group_000", 00:18:21.333 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:21.333 "listen_address": { 00:18:21.333 "trtype": "TCP", 00:18:21.333 "adrfam": "IPv4", 00:18:21.333 "traddr": "10.0.0.2", 00:18:21.333 "trsvcid": "4420" 00:18:21.333 }, 00:18:21.333 "peer_address": { 00:18:21.333 "trtype": "TCP", 00:18:21.333 "adrfam": "IPv4", 00:18:21.333 "traddr": "10.0.0.1", 00:18:21.333 "trsvcid": "33520" 00:18:21.333 }, 00:18:21.333 "auth": { 00:18:21.333 "state": "completed", 00:18:21.333 "digest": "sha512", 00:18:21.333 "dhgroup": "ffdhe6144" 00:18:21.333 } 00:18:21.333 } 00:18:21.333 ]' 00:18:21.333 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:21.333 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:21.333 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:21.333 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:21.333 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.595 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.595 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.595 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.595 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDdkOTQ3N2NmMmFhZDRjOWM3MzQ2YTMxZTJmZDY3YTA5ZDIyOTk0MGViNDQwMjc3A3oXQQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY1ZGY5OTJkNTA1MDhiYTcyZWUxODVkMWQyYzI5YjYyOWU0ZmY0MDgwNzYwYjM1ZDYwYTlmMGVkZDQzNzYyNh9xvq0=: 00:18:21.595 19:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZDdkOTQ3N2NmMmFhZDRjOWM3MzQ2YTMxZTJmZDY3YTA5ZDIyOTk0MGViNDQwMjc3A3oXQQ==: --dhchap-ctrl-secret 
DHHC-1:03:ZmY1ZGY5OTJkNTA1MDhiYTcyZWUxODVkMWQyYzI5YjYyOWU0ZmY0MDgwNzYwYjM1ZDYwYTlmMGVkZDQzNzYyNh9xvq0=: 00:18:22.543 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.544 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:22.544 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.544 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.544 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.544 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:22.544 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:22.544 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:22.544 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:22.544 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.544 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:22.544 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:22.544 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:22.544 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.544 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.544 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.544 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.544 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.544 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.544 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.544 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.805 00:18:22.805 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:22.805 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:22.805 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.067 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.067 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.067 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.067 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.067 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.067 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:23.067 { 00:18:23.067 "cntlid": 131, 00:18:23.067 "qid": 0, 00:18:23.067 "state": "enabled", 00:18:23.067 "thread": "nvmf_tgt_poll_group_000", 00:18:23.067 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:23.067 "listen_address": { 00:18:23.067 "trtype": "TCP", 00:18:23.067 "adrfam": "IPv4", 00:18:23.067 "traddr": "10.0.0.2", 00:18:23.067 "trsvcid": "4420" 00:18:23.067 }, 00:18:23.067 "peer_address": { 00:18:23.067 "trtype": "TCP", 00:18:23.067 "adrfam": "IPv4", 00:18:23.067 "traddr": "10.0.0.1", 00:18:23.067 "trsvcid": "33542" 00:18:23.067 }, 00:18:23.067 "auth": { 00:18:23.067 "state": "completed", 00:18:23.067 "digest": "sha512", 00:18:23.067 "dhgroup": "ffdhe6144" 00:18:23.067 } 00:18:23.067 } 00:18:23.067 ]' 00:18:23.067 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:23.067 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:23.067 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:23.067 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:23.067 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:23.327 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.327 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.327 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.327 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: --dhchap-ctrl-secret DHHC-1:02:MzI3NzI4YjRiYzQ1ZWRjMzcwM2Y2ODVjNDZhMTcwOTQxNjgzN2YzNWFkMzVhYjcwxm6ZNA==: 00:18:23.327 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: --dhchap-ctrl-secret DHHC-1:02:MzI3NzI4YjRiYzQ1ZWRjMzcwM2Y2ODVjNDZhMTcwOTQxNjgzN2YzNWFkMzVhYjcwxm6ZNA==: 00:18:23.897 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.158 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:24.158 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.158 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.158 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.158 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:24.158 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:24.158 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:24.158 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:24.158 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.158 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:24.158 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:24.158 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:24.158 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.158 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.158 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.158 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.158 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.158 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.158 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.158 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.729 00:18:24.729 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.729 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.729 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.729 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.729 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.729 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.729 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.729 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.729 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:24.729 { 00:18:24.729 "cntlid": 133, 00:18:24.729 "qid": 0, 00:18:24.729 "state": "enabled", 00:18:24.729 "thread": "nvmf_tgt_poll_group_000", 00:18:24.729 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:24.729 "listen_address": { 00:18:24.729 "trtype": "TCP", 00:18:24.729 "adrfam": "IPv4", 00:18:24.729 "traddr": "10.0.0.2", 00:18:24.729 "trsvcid": "4420" 00:18:24.729 }, 00:18:24.729 "peer_address": { 00:18:24.729 "trtype": "TCP", 00:18:24.729 "adrfam": "IPv4", 00:18:24.729 "traddr": "10.0.0.1", 00:18:24.729 "trsvcid": "33582" 00:18:24.729 }, 00:18:24.729 "auth": { 00:18:24.729 "state": "completed", 00:18:24.729 "digest": "sha512", 00:18:24.729 "dhgroup": "ffdhe6144" 00:18:24.729 } 00:18:24.729 } 00:18:24.729 ]' 00:18:24.729 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:24.729 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:24.729 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:24.993 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:24.993 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.993 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.993 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.993 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.993 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: --dhchap-ctrl-secret 
DHHC-1:01:MmZlMmE3MDA5ZGZjYmU3NzM0NjAwNDJjNzllY2ViN2SNmBcN: 00:18:24.993 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: --dhchap-ctrl-secret DHHC-1:01:MmZlMmE3MDA5ZGZjYmU3NzM0NjAwNDJjNzllY2ViN2SNmBcN: 00:18:25.934 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.934 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:25.934 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.934 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.934 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.934 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:25.934 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:25.934 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:25.934 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:25.934 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:25.934 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:25.934 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:25.934 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:25.934 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.934 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:25.934 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.934 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.934 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.934 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:25.934 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:18:25.934 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:26.196 00:18:26.196 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:26.196 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.196 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.456 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.456 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.456 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.456 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.456 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.456 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:26.456 { 00:18:26.456 "cntlid": 135, 00:18:26.456 "qid": 0, 00:18:26.456 "state": "enabled", 00:18:26.457 "thread": "nvmf_tgt_poll_group_000", 00:18:26.457 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:26.457 "listen_address": { 00:18:26.457 "trtype": "TCP", 00:18:26.457 "adrfam": "IPv4", 00:18:26.457 "traddr": "10.0.0.2", 00:18:26.457 "trsvcid": "4420" 00:18:26.457 }, 00:18:26.457 "peer_address": { 00:18:26.457 "trtype": "TCP", 00:18:26.457 "adrfam": "IPv4", 00:18:26.457 "traddr": "10.0.0.1", 00:18:26.457 "trsvcid": "33604" 00:18:26.457 }, 00:18:26.457 "auth": { 00:18:26.457 "state": "completed", 00:18:26.457 "digest": "sha512", 00:18:26.457 "dhgroup": "ffdhe6144" 00:18:26.457 } 00:18:26.457 } 00:18:26.457 ]' 00:18:26.457 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:26.457 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:26.457 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:26.718 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:26.718 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:26.718 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.718 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.718 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.718 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=: 00:18:26.718 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=: 00:18:27.659 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.659 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:27.659 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.659 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.659 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.659 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:27.659 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:27.659 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:27.659 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:27.659 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:27.659 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:27.659 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:27.659 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:27.659 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:27.659 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.659 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.659 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.660 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.660 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.660 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.660 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.660 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.231 00:18:28.231 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.231 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:28.231 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.231 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.231 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.231 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.231 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.231 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.231 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:28.231 { 00:18:28.231 "cntlid": 137, 00:18:28.231 "qid": 0, 00:18:28.231 "state": "enabled", 00:18:28.231 "thread": "nvmf_tgt_poll_group_000", 00:18:28.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:28.231 "listen_address": { 00:18:28.231 "trtype": "TCP", 00:18:28.231 "adrfam": "IPv4", 00:18:28.231 "traddr": "10.0.0.2", 00:18:28.231 "trsvcid": "4420" 00:18:28.231 }, 00:18:28.231 "peer_address": { 00:18:28.231 "trtype": "TCP", 00:18:28.231 "adrfam": "IPv4", 00:18:28.231 "traddr": "10.0.0.1", 00:18:28.231 "trsvcid": "34116" 00:18:28.231 }, 00:18:28.231 "auth": { 00:18:28.231 "state": "completed", 00:18:28.231 "digest": "sha512", 00:18:28.231 "dhgroup": "ffdhe8192" 00:18:28.231 } 00:18:28.231 } 00:18:28.231 ]' 00:18:28.231 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.492 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:28.492 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.492 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:28.492 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:28.492 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.492 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.492 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.752 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDdkOTQ3N2NmMmFhZDRjOWM3MzQ2YTMxZTJmZDY3YTA5ZDIyOTk0MGViNDQwMjc3A3oXQQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY1ZGY5OTJkNTA1MDhiYTcyZWUxODVkMWQyYzI5YjYyOWU0ZmY0MDgwNzYwYjM1ZDYwYTlmMGVkZDQzNzYyNh9xvq0=: 00:18:28.753 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZDdkOTQ3N2NmMmFhZDRjOWM3MzQ2YTMxZTJmZDY3YTA5ZDIyOTk0MGViNDQwMjc3A3oXQQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY1ZGY5OTJkNTA1MDhiYTcyZWUxODVkMWQyYzI5YjYyOWU0ZmY0MDgwNzYwYjM1ZDYwYTlmMGVkZDQzNzYyNh9xvq0=: 00:18:29.324 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.324 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:29.324 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.324 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.324 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.324 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:29.324 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:29.324 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:29.584 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:29.584 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:29.584 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:29.584 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:29.584 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:29.584 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.584 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.584 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.584 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.584 19:08:46 
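Two RPC channels alternate throughout this trace: rpc_cmd drives the NVMe-oF target over the default application socket, while hostrpc (expanded at target/auth.sh@31 on every call) points the same rpc.py at /var/tmp/host.sock, where a second SPDK application plays the NVMe host. A minimal sketch of that wrapper, reconstructed from the expansions visible above:

  # hostrpc, reconstructed from its xtrace expansion: same rpc.py script,
  # different Unix socket, so the command lands on the initiator app.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }

  hostrpc bdev_nvme_get_controllers                            # asks the host app
  "$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0  # asks the target
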
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.584 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.584 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.584 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.845 00:18:30.106 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:30.106 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:30.106 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.106 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.106 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.107 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.107 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.107 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.107 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:30.107 { 00:18:30.107 "cntlid": 139, 00:18:30.107 "qid": 0, 00:18:30.107 "state": "enabled", 00:18:30.107 "thread": "nvmf_tgt_poll_group_000", 00:18:30.107 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:30.107 "listen_address": { 00:18:30.107 "trtype": "TCP", 00:18:30.107 "adrfam": "IPv4", 00:18:30.107 "traddr": "10.0.0.2", 00:18:30.107 "trsvcid": "4420" 00:18:30.107 }, 00:18:30.107 "peer_address": { 00:18:30.107 "trtype": "TCP", 00:18:30.107 "adrfam": "IPv4", 00:18:30.107 "traddr": "10.0.0.1", 00:18:30.107 "trsvcid": "34148" 00:18:30.107 }, 00:18:30.107 "auth": { 00:18:30.107 "state": "completed", 00:18:30.107 "digest": "sha512", 00:18:30.107 "dhgroup": "ffdhe8192" 00:18:30.107 } 00:18:30.107 } 00:18:30.107 ]' 00:18:30.107 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:30.107 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:30.107 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.367 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:30.367 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.367 19:08:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.367 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.367 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.367 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: --dhchap-ctrl-secret DHHC-1:02:MzI3NzI4YjRiYzQ1ZWRjMzcwM2Y2ODVjNDZhMTcwOTQxNjgzN2YzNWFkMzVhYjcwxm6ZNA==: 00:18:30.367 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: --dhchap-ctrl-secret DHHC-1:02:MzI3NzI4YjRiYzQ1ZWRjMzcwM2Y2ODVjNDZhMTcwOTQxNjgzN2YzNWFkMzVhYjcwxm6ZNA==: 00:18:31.307 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.307 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:31.307 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.307 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.307 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.307 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:31.307 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:31.307 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:31.307 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:31.307 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:31.307 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:31.307 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:31.307 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:31.307 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.307 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.307 19:08:48 
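Each RPC-driven round-trip is then repeated with the kernel initiator through nvme-cli (the nvme_connect expansion at target/auth.sh@36). The host secret goes in --dhchap-secret; when a controller key was provisioned, --dhchap-ctrl-secret makes the authentication bidirectional. Stripped to its shape, with the secrets abbreviated and all other values as in the trace:

  # nvme-cli counterpart of the bdev attach test: connect with DH-HMAC-CHAP
  # secrets, then disconnect by subsystem NQN. The DHHC-1:xx:... strings are
  # the generated secrets from the trace, shortened here for readability.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
    --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
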
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.307 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.307 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.307 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.307 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.307 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.878 00:18:31.878 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:31.878 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:31.878 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.878 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.878 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.878 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.878 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.878 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.878 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:31.878 { 00:18:31.878 "cntlid": 141, 00:18:31.878 "qid": 0, 00:18:31.878 "state": "enabled", 00:18:31.878 "thread": "nvmf_tgt_poll_group_000", 00:18:31.878 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:31.878 "listen_address": { 00:18:31.878 "trtype": "TCP", 00:18:31.878 "adrfam": "IPv4", 00:18:31.878 "traddr": "10.0.0.2", 00:18:31.878 "trsvcid": "4420" 00:18:31.878 }, 00:18:31.878 "peer_address": { 00:18:31.878 "trtype": "TCP", 00:18:31.878 "adrfam": "IPv4", 00:18:31.878 "traddr": "10.0.0.1", 00:18:31.878 "trsvcid": "34172" 00:18:31.878 }, 00:18:31.878 "auth": { 00:18:31.878 "state": "completed", 00:18:31.878 "digest": "sha512", 00:18:31.878 "dhgroup": "ffdhe8192" 00:18:31.878 } 00:18:31.878 } 00:18:31.878 ]' 00:18:31.878 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:32.138 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:32.138 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:32.138 19:08:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:32.138 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:32.138 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.138 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.138 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.398 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: --dhchap-ctrl-secret DHHC-1:01:MmZlMmE3MDA5ZGZjYmU3NzM0NjAwNDJjNzllY2ViN2SNmBcN: 00:18:32.398 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: --dhchap-ctrl-secret DHHC-1:01:MmZlMmE3MDA5ZGZjYmU3NzM0NjAwNDJjNzllY2ViN2SNmBcN: 00:18:32.968 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.968 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:32.968 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.968 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.968 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.968 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:32.968 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:32.968 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:33.228 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:33.228 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:33.228 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:33.228 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:33.228 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:33.228 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.228 19:08:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:33.228 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.228 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.228 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.228 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:33.228 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:33.228 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:33.798 00:18:33.798 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:33.798 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:33.798 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.798 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.798 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.798 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.798 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.798 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.798 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:33.798 { 00:18:33.798 "cntlid": 143, 00:18:33.798 "qid": 0, 00:18:33.798 "state": "enabled", 00:18:33.798 "thread": "nvmf_tgt_poll_group_000", 00:18:33.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:33.798 "listen_address": { 00:18:33.798 "trtype": "TCP", 00:18:33.798 "adrfam": "IPv4", 00:18:33.798 "traddr": "10.0.0.2", 00:18:33.798 "trsvcid": "4420" 00:18:33.798 }, 00:18:33.798 "peer_address": { 00:18:33.798 "trtype": "TCP", 00:18:33.798 "adrfam": "IPv4", 00:18:33.798 "traddr": "10.0.0.1", 00:18:33.798 "trsvcid": "34206" 00:18:33.798 }, 00:18:33.798 "auth": { 00:18:33.798 "state": "completed", 00:18:33.798 "digest": "sha512", 00:18:33.798 "dhgroup": "ffdhe8192" 00:18:33.798 } 00:18:33.798 } 00:18:33.798 ]' 00:18:33.798 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:33.798 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:33.798 
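The backslash-heavy comparisons here ([[ sha512 == \s\h\a\5\1\2 ]] and friends) are only xtrace rendering: bash escapes the right-hand side of == so it prints as a literal pattern. Reconstructed, connect_authenticate's verification step (target/auth.sh@73-77) is three jq lookups against the qpair dump:

  # Verification reconstructed from the trace: fetch the live qpair and
  # assert the negotiated authentication parameters (rpc_cmd stands for the
  # script's target-side RPC helper).
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
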
19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:34.058 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:34.058 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:34.058 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.058 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.058 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.317 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=: 00:18:34.317 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=: 00:18:34.887 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.887 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.887 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:34.887 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.887 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.887 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.887 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:34.887 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:34.887 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:34.887 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:34.887 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:34.887 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:35.147 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:35.147 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:35.147 19:08:52 
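The IFS=, / printf %s pairs at target/auth.sh@129-130 are the script joining arrays into the comma-separated lists that bdev_nvme_set_options expects, which is how this final pass enables every digest and DH group at once rather than one combination per iteration:

  # Joining arrays into comma-separated option values, as at auth.sh@129-130:
  # "${arr[*]}" expands with the first character of IFS between elements
  # (hostrpc as sketched earlier).
  digests=(sha256 sha384 sha512)
  dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
  hostrpc bdev_nvme_set_options \
    --dhchap-digests "$(IFS=,; printf %s "${digests[*]}")" \
    --dhchap-dhgroups "$(IFS=,; printf %s "${dhgroups[*]}")"
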
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:35.147 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:35.147 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:35.147 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.147 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.147 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.147 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.147 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.147 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.147 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.147 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.717 00:18:35.717 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:35.717 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:35.717 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.717 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.717 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.717 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.717 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.977 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.977 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:35.977 { 00:18:35.977 "cntlid": 145, 00:18:35.977 "qid": 0, 00:18:35.977 "state": "enabled", 00:18:35.977 "thread": "nvmf_tgt_poll_group_000", 00:18:35.977 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:35.977 "listen_address": { 00:18:35.977 "trtype": "TCP", 00:18:35.977 "adrfam": "IPv4", 00:18:35.977 "traddr": "10.0.0.2", 00:18:35.977 "trsvcid": "4420" 00:18:35.977 }, 00:18:35.977 "peer_address": { 00:18:35.977 
"trtype": "TCP", 00:18:35.977 "adrfam": "IPv4", 00:18:35.977 "traddr": "10.0.0.1", 00:18:35.977 "trsvcid": "34236" 00:18:35.977 }, 00:18:35.977 "auth": { 00:18:35.977 "state": "completed", 00:18:35.977 "digest": "sha512", 00:18:35.977 "dhgroup": "ffdhe8192" 00:18:35.977 } 00:18:35.977 } 00:18:35.977 ]' 00:18:35.977 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:35.977 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:35.977 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:35.977 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:35.977 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:35.977 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.977 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.977 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.236 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDdkOTQ3N2NmMmFhZDRjOWM3MzQ2YTMxZTJmZDY3YTA5ZDIyOTk0MGViNDQwMjc3A3oXQQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY1ZGY5OTJkNTA1MDhiYTcyZWUxODVkMWQyYzI5YjYyOWU0ZmY0MDgwNzYwYjM1ZDYwYTlmMGVkZDQzNzYyNh9xvq0=: 00:18:36.236 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZDdkOTQ3N2NmMmFhZDRjOWM3MzQ2YTMxZTJmZDY3YTA5ZDIyOTk0MGViNDQwMjc3A3oXQQ==: --dhchap-ctrl-secret DHHC-1:03:ZmY1ZGY5OTJkNTA1MDhiYTcyZWUxODVkMWQyYzI5YjYyOWU0ZmY0MDgwNzYwYjM1ZDYwYTlmMGVkZDQzNzYyNh9xvq0=: 00:18:36.806 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.806 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:36.806 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.806 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.806 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.806 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:36.806 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.806 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.806 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.806 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:36.806 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:36.806 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:36.807 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:36.807 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:36.807 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:36.807 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:36.807 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:36.807 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:36.807 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:37.377 request: 00:18:37.377 { 00:18:37.377 "name": "nvme0", 00:18:37.377 "trtype": "tcp", 00:18:37.377 "traddr": "10.0.0.2", 00:18:37.377 "adrfam": "ipv4", 00:18:37.377 "trsvcid": "4420", 00:18:37.377 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:37.377 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:37.377 "prchk_reftag": false, 00:18:37.377 "prchk_guard": false, 00:18:37.377 "hdgst": false, 00:18:37.377 "ddgst": false, 00:18:37.377 "dhchap_key": "key2", 00:18:37.377 "allow_unrecognized_csi": false, 00:18:37.377 "method": "bdev_nvme_attach_controller", 00:18:37.377 "req_id": 1 00:18:37.377 } 00:18:37.377 Got JSON-RPC error response 00:18:37.377 response: 00:18:37.377 { 00:18:37.377 "code": -5, 00:18:37.377 "message": "Input/output error" 00:18:37.377 } 00:18:37.377 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:37.377 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:37.377 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:37.377 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:37.377 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:37.377 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.377 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.377 19:08:54 
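The failure above is the expected outcome: the host entry was re-added with key1 only, so an attach offering key2 is rejected and rpc.py surfaces the authentication failure as JSON-RPC error -5 (Input/output error). The NOT wrapper from common/autotest_common.sh inverts the exit status so the test passes only when the wrapped command fails; condensed, the pattern is roughly:

    NOT() {                 # succeed only if the wrapped command fails
        local es=0
        "$@" || es=$?
        (( es != 0 ))       # the real helper also screens out signal exits (es > 128)
    }
    NOT bdev_connect -b nvme0 --dhchap-key key2    # expected here: JSON-RPC -5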
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.378 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.378 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.378 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.378 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.378 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:37.378 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:37.378 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:37.378 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:37.378 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:37.378 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:37.378 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:37.378 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:37.378 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:37.378 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:37.637 request: 00:18:37.637 { 00:18:37.637 "name": "nvme0", 00:18:37.637 "trtype": "tcp", 00:18:37.637 "traddr": "10.0.0.2", 00:18:37.637 "adrfam": "ipv4", 00:18:37.637 "trsvcid": "4420", 00:18:37.637 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:37.637 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:37.637 "prchk_reftag": false, 00:18:37.637 "prchk_guard": false, 00:18:37.637 "hdgst": false, 00:18:37.637 "ddgst": false, 00:18:37.637 "dhchap_key": "key1", 00:18:37.637 "dhchap_ctrlr_key": "ckey2", 00:18:37.637 "allow_unrecognized_csi": false, 00:18:37.637 "method": "bdev_nvme_attach_controller", 00:18:37.637 "req_id": 1 00:18:37.637 } 00:18:37.637 Got JSON-RPC error response 00:18:37.637 response: 00:18:37.637 { 00:18:37.637 "code": -5, 00:18:37.637 "message": "Input/output error" 00:18:37.637 } 00:18:37.897 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:37.897 19:08:54 
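This round exercises the bidirectional half separately: the target now holds key1/ckey1, the host offers the right key1 but the wrong controller key (the request dump shows "dhchap_key": "key1", "dhchap_ctrlr_key": "ckey2"), and the attach fails with the same -5. With mutual DH-HMAC-CHAP each side authenticates the other, so either mismatched half is fatal. The failing call, reduced to the flags that matter:

    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -n nqn.2024-03.io.spdk:cnode0 \
        -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --dhchap-key key1 --dhchap-ctrlr-key ckey2   # target expects ckey1 -> rejected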
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:37.897 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:37.897 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:37.897 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:37.897 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.897 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.897 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.897 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:37.897 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.897 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.897 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.897 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.897 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:37.897 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.897 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:37.897 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:37.897 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:37.897 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:37.897 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.897 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.897 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.157 request: 00:18:38.157 { 00:18:38.157 "name": "nvme0", 00:18:38.157 "trtype": "tcp", 00:18:38.157 "traddr": "10.0.0.2", 00:18:38.157 "adrfam": "ipv4", 00:18:38.157 "trsvcid": "4420", 00:18:38.157 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:38.157 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:38.157 "prchk_reftag": false, 00:18:38.157 "prchk_guard": false, 00:18:38.157 "hdgst": false, 00:18:38.157 "ddgst": false, 00:18:38.157 "dhchap_key": "key1", 00:18:38.157 "dhchap_ctrlr_key": "ckey1", 00:18:38.157 "allow_unrecognized_csi": false, 00:18:38.157 "method": "bdev_nvme_attach_controller", 00:18:38.157 "req_id": 1 00:18:38.157 } 00:18:38.157 Got JSON-RPC error response 00:18:38.157 response: 00:18:38.157 { 00:18:38.157 "code": -5, 00:18:38.157 "message": "Input/output error" 00:18:38.157 } 00:18:38.157 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:38.157 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:38.157 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:38.157 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:38.157 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:38.157 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.157 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.157 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.157 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2912257 00:18:38.157 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2912257 ']' 00:18:38.157 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2912257 00:18:38.157 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:38.157 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:38.157 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2912257 00:18:38.417 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:38.417 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:38.417 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2912257' 00:18:38.417 killing process with pid 2912257 00:18:38.417 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2912257 00:18:38.417 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2912257 00:18:38.417 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:38.417 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:38.417 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:38.417 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:38.417 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2937991 00:18:38.417 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2937991 00:18:38.417 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:38.417 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2937991 ']' 00:18:38.417 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.417 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:38.417 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.417 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:38.417 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.357 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:39.357 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:39.357 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:39.357 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:39.357 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.357 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.357 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:39.357 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2937991 00:18:39.357 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2937991 ']' 00:18:39.357 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.357 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:39.357 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
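The target has been relaunched with --wait-for-rpc, which holds initialization until RPCs configure it, and -L nvmf_auth for auth-layer debug logging. From here the DHHC-1 secrets are no longer passed inline: each generated key file is first registered with the target's keyring, and hosts then reference keys by name. A sketch of the pattern the following records exercise, using the key files generated earlier in this run (rpc_cmd is the suite's wrapper around the target-side rpc.py):

    rpc_cmd keyring_file_add_key key0  /tmp/spdk.key-null.MUn      # DH-CHAP key
    rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.aOq    # controller key, when one exists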
00:18:39.357 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:39.357 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.617 null0 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.MUn 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.aOq ]] 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.aOq 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.tBr 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.P6b ]] 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.P6b 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:39.617 19:08:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.0Vm 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.zYl ]] 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.zYl 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.HAm 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
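With all four key pairs in the keyring, connect_authenticate runs again (sha512/ffdhe8192, now with key3) and, as before, finishes by pulling the subsystem's queue pairs and asserting the negotiated parameters out of the JSON shown below. Paraphrasing the three jq assertions from target/auth.sh:

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    jq -r '.[0].auth.digest'  <<< "$qpairs"    # expect: sha512
    jq -r '.[0].auth.dhgroup' <<< "$qpairs"    # expect: ffdhe8192
    jq -r '.[0].auth.state'   <<< "$qpairs"    # expect: completed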
00:18:39.617 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:40.557 nvme0n1 00:18:40.557 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:40.557 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:40.557 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.557 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.557 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.557 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.557 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.557 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.557 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:40.557 { 00:18:40.557 "cntlid": 1, 00:18:40.557 "qid": 0, 00:18:40.557 "state": "enabled", 00:18:40.557 "thread": "nvmf_tgt_poll_group_000", 00:18:40.557 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:40.557 "listen_address": { 00:18:40.557 "trtype": "TCP", 00:18:40.557 "adrfam": "IPv4", 00:18:40.557 "traddr": "10.0.0.2", 00:18:40.557 "trsvcid": "4420" 00:18:40.557 }, 00:18:40.557 "peer_address": { 00:18:40.557 "trtype": "TCP", 00:18:40.557 "adrfam": "IPv4", 00:18:40.557 "traddr": "10.0.0.1", 00:18:40.557 "trsvcid": "55658" 00:18:40.557 }, 00:18:40.557 "auth": { 00:18:40.557 "state": "completed", 00:18:40.557 "digest": "sha512", 00:18:40.557 "dhgroup": "ffdhe8192" 00:18:40.557 } 00:18:40.557 } 00:18:40.557 ]' 00:18:40.557 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:40.818 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:40.818 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:40.818 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:40.818 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:40.818 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.818 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.818 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.078 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=: 00:18:41.078 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=: 00:18:41.648 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.648 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:41.648 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.648 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.648 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.648 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:41.648 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.648 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.648 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.648 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:41.648 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:41.909 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:41.909 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:41.909 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:41.909 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:41.909 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:41.909 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:41.909 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:41.909 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:41.909 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:41.909 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:41.909 request: 00:18:41.909 { 00:18:41.909 "name": "nvme0", 00:18:41.909 "trtype": "tcp", 00:18:41.909 "traddr": "10.0.0.2", 00:18:41.909 "adrfam": "ipv4", 00:18:41.909 "trsvcid": "4420", 00:18:41.909 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:41.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:41.909 "prchk_reftag": false, 00:18:41.909 "prchk_guard": false, 00:18:41.909 "hdgst": false, 00:18:41.909 "ddgst": false, 00:18:41.909 "dhchap_key": "key3", 00:18:41.909 "allow_unrecognized_csi": false, 00:18:41.909 "method": "bdev_nvme_attach_controller", 00:18:41.909 "req_id": 1 00:18:41.909 } 00:18:41.909 Got JSON-RPC error response 00:18:41.909 response: 00:18:41.909 { 00:18:41.909 "code": -5, 00:18:41.909 "message": "Input/output error" 00:18:41.909 } 00:18:41.909 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:41.909 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:41.909 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:41.909 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:41.909 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:41.909 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:41.909 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:41.909 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:42.168 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:42.168 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:42.168 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:42.168 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:42.168 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:42.168 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:42.168 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:42.168 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:42.168 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:42.168 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:42.428 request: 00:18:42.428 { 00:18:42.428 "name": "nvme0", 00:18:42.428 "trtype": "tcp", 00:18:42.428 "traddr": "10.0.0.2", 00:18:42.428 "adrfam": "ipv4", 00:18:42.428 "trsvcid": "4420", 00:18:42.428 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:42.428 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:42.428 "prchk_reftag": false, 00:18:42.428 "prchk_guard": false, 00:18:42.428 "hdgst": false, 00:18:42.428 "ddgst": false, 00:18:42.428 "dhchap_key": "key3", 00:18:42.428 "allow_unrecognized_csi": false, 00:18:42.428 "method": "bdev_nvme_attach_controller", 00:18:42.428 "req_id": 1 00:18:42.428 } 00:18:42.428 Got JSON-RPC error response 00:18:42.428 response: 00:18:42.428 { 00:18:42.428 "code": -5, 00:18:42.428 "message": "Input/output error" 00:18:42.428 } 00:18:42.428 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:42.428 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:42.428 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:42.428 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:42.428 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:42.428 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:42.428 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:42.428 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:42.428 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:42.428 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:42.428 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:42.428 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.428 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.428 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.428 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:42.428 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.428 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.428 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.428 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:42.428 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:42.428 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:42.428 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:42.428 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:42.428 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:42.428 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:42.428 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:42.428 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:42.428 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:42.999 request: 00:18:42.999 { 00:18:42.999 "name": "nvme0", 00:18:42.999 "trtype": "tcp", 00:18:42.999 "traddr": "10.0.0.2", 00:18:42.999 "adrfam": "ipv4", 00:18:42.999 "trsvcid": "4420", 00:18:42.999 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:42.999 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:42.999 "prchk_reftag": false, 00:18:42.999 "prchk_guard": false, 00:18:42.999 "hdgst": false, 00:18:42.999 "ddgst": false, 00:18:42.999 "dhchap_key": "key0", 00:18:42.999 "dhchap_ctrlr_key": "key1", 00:18:42.999 "allow_unrecognized_csi": false, 00:18:42.999 "method": "bdev_nvme_attach_controller", 00:18:42.999 "req_id": 1 00:18:42.999 } 00:18:42.999 Got JSON-RPC error response 00:18:42.999 response: 00:18:42.999 { 00:18:42.999 "code": -5, 00:18:42.999 "message": "Input/output error" 00:18:42.999 } 00:18:42.999 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:42.999 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:42.999 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:42.999 19:08:59 
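After this last expected rejection the test stops tearing hosts down and instead rotates credentials on the live subsystem: nvmf_subsystem_set_keys swaps the DH-CHAP key (and optionally the controller key) for an existing host entry, and the host follows either by re-attaching with the new --dhchap-key or, for an already-connected kernel controller, by writing the new DHHC-1 secret into the controller's sysfs attributes (what the nvme_set_keys helper further down does; the redirection target is implicit in the xtrace, so the sketch assumes the Linux nvme host driver's dhchap_secret/dhchap_ctrl_secret attributes). One rotation, with values from the records below:

    # target side: this host must now present key2 and verify the target with key3
    rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # host side (kernel controller): push the matching secret to re-authenticate
    echo "DHHC-1:01:..." > /sys/devices/virtual/nvme-fabrics/ctl/nvme0/dhchap_secret   # secret elided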
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:42.999 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:42.999 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:42.999 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:42.999 nvme0n1 00:18:42.999 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:42.999 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:42.999 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.261 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.261 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.262 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.565 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:43.565 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.565 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.565 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.565 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:43.565 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:43.565 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:44.134 nvme0n1 00:18:44.134 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:44.134 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:44.134 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.393 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.393 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:44.393 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.393 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.393 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.393 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:44.393 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.393 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:44.652 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.652 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: --dhchap-ctrl-secret DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=: 00:18:44.652 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: --dhchap-ctrl-secret DHHC-1:03:YWJhNDdhYmUwZTFjZGNiM2Y0NTBjMzQ0YWU4ZmEyYTU3NzA2NDYwYWEwZTNhNGE4OGMyOWU3NjkwYmQwMDg2ZA1WgUk=: 00:18:45.221 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:45.221 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:45.221 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:45.221 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:45.221 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:45.221 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:45.221 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:45.221 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.221 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.481 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:18:45.481 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:45.481 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:45.481 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:45.481 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:45.481 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:45.481 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:45.482 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:45.482 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:45.482 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:45.742 request: 00:18:45.742 { 00:18:45.742 "name": "nvme0", 00:18:45.742 "trtype": "tcp", 00:18:45.742 "traddr": "10.0.0.2", 00:18:45.742 "adrfam": "ipv4", 00:18:45.742 "trsvcid": "4420", 00:18:45.742 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:45.742 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:45.742 "prchk_reftag": false, 00:18:45.742 "prchk_guard": false, 00:18:45.742 "hdgst": false, 00:18:45.742 "ddgst": false, 00:18:45.742 "dhchap_key": "key1", 00:18:45.742 "allow_unrecognized_csi": false, 00:18:45.742 "method": "bdev_nvme_attach_controller", 00:18:45.742 "req_id": 1 00:18:45.742 } 00:18:45.742 Got JSON-RPC error response 00:18:45.742 response: 00:18:45.742 { 00:18:45.742 "code": -5, 00:18:45.742 "message": "Input/output error" 00:18:45.742 } 00:18:45.742 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:45.742 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:45.742 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:45.742 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:45.742 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:45.742 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:45.742 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:46.681 nvme0n1 00:18:46.681 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:46.681 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:46.681 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.681 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.681 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.681 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.941 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:46.941 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.941 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.941 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.941 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:46.941 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:46.941 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:47.200 nvme0n1 00:18:47.200 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:47.200 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:47.200 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.460 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.460 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.460 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.460 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:47.460 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.460 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.719 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.719 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: '' 2s 00:18:47.720 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:47.720 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:47.720 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: 00:18:47.720 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:47.720 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:47.720 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:47.720 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: ]] 00:18:47.720 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YTg2MjAyNzhlYTVjNzY5Y2Y5ZmIzYTcyYmM2NDU1OGMUvAVg: 00:18:47.720 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:47.720 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:47.720 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:49.628 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:49.628 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:49.628 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:49.628 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:49.628 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:49.628 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:49.628 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:49.628 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:49.628 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.628 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.628 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.628 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: 2s 00:18:49.628 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:49.628 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:49.628 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:49.628 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: 00:18:49.628 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:49.628 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:49.628 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:49.628 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: ]] 00:18:49.628 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:OWEyYjIyYjZkNzk1MmE5MmQ4ZTQ0OTVjMjAyNjQyZmNjOGIyYmIxM2IyNDI3OTg0umJl9w==: 00:18:49.628 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:49.628 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:51.537 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:51.537 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:51.537 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:51.537 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:51.537 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:51.537 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:51.798 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:51.798 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.798 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:51.798 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.798 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.798 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.798 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:51.798 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:51.798 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:52.368 nvme0n1 00:18:52.368 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:52.368 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.368 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.368 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.368 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:52.368 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:52.937 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:52.937 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:52.937 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.198 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.198 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:53.198 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.198 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.198 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.198 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:53.198 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:53.198 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:53.198 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:53.198 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.458 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.458 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:53.458 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.458 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.458 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.459 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:53.459 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:53.459 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:53.459 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:53.459 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.459 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:53.459 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.459 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:53.459 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:54.030 request: 00:18:54.030 { 00:18:54.030 "name": "nvme0", 00:18:54.030 "dhchap_key": "key1", 00:18:54.030 "dhchap_ctrlr_key": "key3", 00:18:54.030 "method": "bdev_nvme_set_keys", 00:18:54.030 "req_id": 1 00:18:54.030 } 00:18:54.030 Got JSON-RPC error response 00:18:54.030 response: 00:18:54.030 { 00:18:54.030 "code": -13, 00:18:54.030 "message": "Permission denied" 00:18:54.030 } 00:18:54.030 19:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:54.030 19:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:54.030 19:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:54.030 19:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:54.030 19:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:54.030 19:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:54.030 19:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.030 19:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:18:54.030 19:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:55.413 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:55.413 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:55.413 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.413 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:55.413 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:55.413 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.413 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.413 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.413 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:55.413 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:55.413 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:55.986 nvme0n1 00:18:55.986 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:55.986 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.986 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.986 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.986 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:55.986 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:55.986 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:55.986 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
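
[annotation] The exchange above is the core of the authenticated re-key test: the target restricts the subsystem/host pair to a new DH-HMAC-CHAP key set, the host rotates its live controller to match, and a deliberately mismatched rotation is expected to fail with -13 "Permission denied". Condensed into a standalone sketch (not part of the recorded run; socket paths, RPC names and key slots are copied from the trace, DHHC-1 key material elided on purpose):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # 1. Target side (default RPC socket): pin the subsystem/host pair to new keys.
    $rpc nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --dhchap-key key2 --dhchap-ctrlr-key key3

    # 2. Host side: rotate the live controller to the matching key set.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3

    # 3. Confirm the controller survived the rotation.
    $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'

    # A stale rotation (e.g. key1/key3 after the target has moved on) is the
    # negative case: the trace above shows it rejected with "Permission denied".
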
00:18:55.986 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.986 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:55.986 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.986 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:55.986 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:56.559 request: 00:18:56.559 { 00:18:56.559 "name": "nvme0", 00:18:56.559 "dhchap_key": "key2", 00:18:56.559 "dhchap_ctrlr_key": "key0", 00:18:56.559 "method": "bdev_nvme_set_keys", 00:18:56.559 "req_id": 1 00:18:56.559 } 00:18:56.559 Got JSON-RPC error response 00:18:56.559 response: 00:18:56.559 { 00:18:56.559 "code": -13, 00:18:56.559 "message": "Permission denied" 00:18:56.559 } 00:18:56.559 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:56.559 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:56.559 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:56.559 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:56.559 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:56.559 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:56.559 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.820 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:56.820 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:57.761 19:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:57.761 19:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:57.761 19:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.022 19:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:58.022 19:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:58.022 19:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:58.022 19:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2912437 00:18:58.022 19:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2912437 ']' 00:18:58.022 19:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2912437 00:18:58.022 19:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:58.022 
19:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:58.022 19:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2912437 00:18:58.022 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:58.023 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:58.023 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2912437' 00:18:58.023 killing process with pid 2912437 00:18:58.023 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2912437 00:18:58.023 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2912437 00:18:58.283 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:58.283 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:58.283 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:58.283 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:58.283 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:58.283 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:58.283 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:58.283 rmmod nvme_tcp 00:18:58.283 rmmod nvme_fabrics 00:18:58.283 rmmod nvme_keyring 00:18:58.283 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:58.283 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:58.283 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:58.283 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2937991 ']' 00:18:58.283 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2937991 00:18:58.283 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2937991 ']' 00:18:58.283 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2937991 00:18:58.283 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:58.283 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:58.283 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2937991 00:18:58.283 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:58.283 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:58.283 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2937991' 00:18:58.283 killing process with pid 2937991 00:18:58.283 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2937991 00:18:58.283 19:09:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2937991 00:18:58.283 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:58.283 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:58.283 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:58.283 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:58.283 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:18:58.283 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:58.283 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:58.545 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:58.545 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:58.545 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.545 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:58.545 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.457 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:00.457 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.MUn /tmp/spdk.key-sha256.tBr /tmp/spdk.key-sha384.0Vm /tmp/spdk.key-sha512.HAm /tmp/spdk.key-sha512.aOq /tmp/spdk.key-sha384.P6b /tmp/spdk.key-sha256.zYl '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:00.457 00:19:00.457 real 2m37.092s 00:19:00.457 user 5m53.278s 00:19:00.457 sys 0m24.710s 00:19:00.457 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:00.457 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.457 ************************************ 00:19:00.457 END TEST nvmf_auth_target 00:19:00.457 ************************************ 00:19:00.457 19:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:19:00.457 19:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:00.457 19:09:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:00.457 19:09:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:00.457 19:09:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:00.457 ************************************ 00:19:00.457 START TEST nvmf_bdevio_no_huge 00:19:00.457 ************************************ 00:19:00.457 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:00.719 * Looking for test storage... 
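
[annotation] The lcov version check traced just below walks scripts/common.sh's comparison helpers field by field (split on ".-:", compare numerically, first differing field decides). A condensed standalone sketch of that logic (not the verbatim helper; missing fields are assumed to compare as 0):

    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"   # e.g. installed lcov "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$2"   # required "2"               -> (2)
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a < b )) && return 0      # first differing field decides
            (( a > b )) && return 1
        done
        return 1                         # equal -> not less-than
    }
    lt 1.15 2 && echo "lcov 1.15 < 2"    # matches the trace's conclusion
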
00:19:00.719 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:00.719 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:00.719 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:19:00.719 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:00.719 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:00.719 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:00.719 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:00.719 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:00.719 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:19:00.719 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:19:00.719 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:19:00.719 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:19:00.719 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:19:00.719 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:19:00.719 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:19:00.719 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:00.719 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:19:00.719 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:19:00.719 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:00.719 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:00.719 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:19:00.719 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:19:00.719 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:00.719 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:19:00.719 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:19:00.719 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:19:00.719 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:19:00.719 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:00.719 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:19:00.719 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:19:00.719 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:00.719 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:00.719 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:19:00.719 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:00.719 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:00.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.719 --rc genhtml_branch_coverage=1 00:19:00.719 --rc genhtml_function_coverage=1 00:19:00.719 --rc genhtml_legend=1 00:19:00.719 --rc geninfo_all_blocks=1 00:19:00.719 --rc geninfo_unexecuted_blocks=1 00:19:00.719 00:19:00.719 ' 00:19:00.719 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:00.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.719 --rc genhtml_branch_coverage=1 00:19:00.719 --rc genhtml_function_coverage=1 00:19:00.719 --rc genhtml_legend=1 00:19:00.719 --rc geninfo_all_blocks=1 00:19:00.719 --rc geninfo_unexecuted_blocks=1 00:19:00.719 00:19:00.719 ' 00:19:00.719 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:00.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.719 --rc genhtml_branch_coverage=1 00:19:00.719 --rc genhtml_function_coverage=1 00:19:00.719 --rc genhtml_legend=1 00:19:00.719 --rc geninfo_all_blocks=1 00:19:00.719 --rc geninfo_unexecuted_blocks=1 00:19:00.719 00:19:00.719 ' 00:19:00.719 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:00.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.719 --rc genhtml_branch_coverage=1 00:19:00.720 --rc genhtml_function_coverage=1 00:19:00.720 --rc genhtml_legend=1 00:19:00.720 --rc geninfo_all_blocks=1 00:19:00.720 --rc geninfo_unexecuted_blocks=1 00:19:00.720 00:19:00.720 ' 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:00.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:19:00.720 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:19:08.998 
19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:08.998 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:08.998 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:08.998 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:08.998 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:08.998 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:08.998 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.574 ms 00:19:08.998 00:19:08.998 --- 10.0.0.2 ping statistics --- 00:19:08.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.998 rtt min/avg/max/mdev = 0.574/0.574/0.574/0.000 ms 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:08.998 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:08.998 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.357 ms 00:19:08.998 00:19:08.998 --- 10.0.0.1 ping statistics --- 00:19:08.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.998 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2946718 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2946718 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2946718 ']' 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:08.998 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:08.998 [2024-11-26 19:09:25.451308] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:19:08.998 [2024-11-26 19:09:25.451390] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:08.998 [2024-11-26 19:09:25.560152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:08.998 [2024-11-26 19:09:25.621726] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:08.998 [2024-11-26 19:09:25.621770] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:08.998 [2024-11-26 19:09:25.621778] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:08.998 [2024-11-26 19:09:25.621785] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:08.998 [2024-11-26 19:09:25.621792] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
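For reference, the nvmfappstart trace above boils down to a single standalone launch; this is a minimal sketch using only the paths and flags recorded in this log, not a general recipe:

  # cvl_0_0 (10.0.0.2) was moved into namespace cvl_0_0_ns_spdk earlier in
  # this log; cvl_0_1 (10.0.0.1) stays in the root namespace as the initiator.
  # -e 0xFFFF enables all tracepoint groups, --no-huge -s 1024 runs the target
  # on 1024 MB of ordinary (non-hugepage) memory, and -m 0x78 pins reactors to
  # cores 3-6, matching the four "Reactor started" notices that follow.
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78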
00:19:08.998 [2024-11-26 19:09:25.623353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:08.998 [2024-11-26 19:09:25.623512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:19:08.998 [2024-11-26 19:09:25.623669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:08.998 [2024-11-26 19:09:25.623670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:19:09.259 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:09.259 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:19:09.259 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:09.259 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:09.259 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:09.259 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:09.259 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:09.259 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.259 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:09.259 [2024-11-26 19:09:26.300233] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:09.259 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.259 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:09.259 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.259 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:09.259 Malloc0 00:19:09.259 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.259 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:09.259 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.259 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:09.259 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.259 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:09.259 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.259 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:09.259 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.259 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:19:09.259 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.259 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:09.259 [2024-11-26 19:09:26.353799] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:09.259 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.259 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:09.259 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:09.259 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:19:09.259 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:19:09.259 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:09.259 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:09.259 { 00:19:09.259 "params": { 00:19:09.259 "name": "Nvme$subsystem", 00:19:09.259 "trtype": "$TEST_TRANSPORT", 00:19:09.259 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:09.259 "adrfam": "ipv4", 00:19:09.259 "trsvcid": "$NVMF_PORT", 00:19:09.259 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:09.259 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:09.259 "hdgst": ${hdgst:-false}, 00:19:09.259 "ddgst": ${ddgst:-false} 00:19:09.259 }, 00:19:09.259 "method": "bdev_nvme_attach_controller" 00:19:09.259 } 00:19:09.259 EOF 00:19:09.259 )") 00:19:09.259 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:19:09.259 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:19:09.259 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:19:09.259 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:09.259 "params": { 00:19:09.259 "name": "Nvme1", 00:19:09.259 "trtype": "tcp", 00:19:09.259 "traddr": "10.0.0.2", 00:19:09.259 "adrfam": "ipv4", 00:19:09.259 "trsvcid": "4420", 00:19:09.259 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:09.260 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:09.260 "hdgst": false, 00:19:09.260 "ddgst": false 00:19:09.260 }, 00:19:09.260 "method": "bdev_nvme_attach_controller" 00:19:09.260 }' 00:19:09.260 [2024-11-26 19:09:26.420004] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
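The JSON printed above is what gen_nvmf_target_json feeds bdevio via /dev/fd/62: a single bdev_nvme_attach_controller entry pointing at the listener created just before it. A sketch of the same run with a regular file in place of the fd (addresses and paths as recorded here; /tmp/bdevio.json is an arbitrary name, and $config_json stands for the JSON shown above):

  printf '%s\n' "$config_json" > /tmp/bdevio.json
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio \
      --json /tmp/bdevio.json --no-huge -s 1024
  # bdevio brings up its own SPDK app (the EAL parameter dump that follows),
  # attaches Nvme1 over TCP to 10.0.0.2:4420, then runs the CUnit suite on it.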
00:19:09.260 [2024-11-26 19:09:26.420082] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2947065 ] 00:19:09.520 [2024-11-26 19:09:26.519303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:09.520 [2024-11-26 19:09:26.579084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:09.520 [2024-11-26 19:09:26.579249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:09.520 [2024-11-26 19:09:26.579421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.781 I/O targets: 00:19:09.781 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:09.781 00:19:09.781 00:19:09.781 CUnit - A unit testing framework for C - Version 2.1-3 00:19:09.781 http://cunit.sourceforge.net/ 00:19:09.781 00:19:09.781 00:19:09.781 Suite: bdevio tests on: Nvme1n1 00:19:09.781 Test: blockdev write read block ...passed 00:19:09.781 Test: blockdev write zeroes read block ...passed 00:19:09.781 Test: blockdev write zeroes read no split ...passed 00:19:10.041 Test: blockdev write zeroes read split ...passed 00:19:10.041 Test: blockdev write zeroes read split partial ...passed 00:19:10.041 Test: blockdev reset ...[2024-11-26 19:09:27.025957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:10.041 [2024-11-26 19:09:27.026055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x106f810 (9): Bad file descriptor 00:19:10.041 [2024-11-26 19:09:27.043484] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:19:10.041 passed 00:19:10.041 Test: blockdev write read 8 blocks ...passed 00:19:10.041 Test: blockdev write read size > 128k ...passed 00:19:10.041 Test: blockdev write read invalid size ...passed 00:19:10.041 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:10.041 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:10.041 Test: blockdev write read max offset ...passed 00:19:10.041 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:10.302 Test: blockdev writev readv 8 blocks ...passed 00:19:10.302 Test: blockdev writev readv 30 x 1block ...passed 00:19:10.302 Test: blockdev writev readv block ...passed 00:19:10.302 Test: blockdev writev readv size > 128k ...passed 00:19:10.302 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:10.302 Test: blockdev comparev and writev ...[2024-11-26 19:09:27.310400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:10.302 [2024-11-26 19:09:27.310451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:10.302 [2024-11-26 19:09:27.310469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:10.302 [2024-11-26 19:09:27.310478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:10.302 [2024-11-26 19:09:27.311010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:10.302 [2024-11-26 19:09:27.311023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:10.302 [2024-11-26 19:09:27.311050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:10.302 [2024-11-26 19:09:27.311059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:10.302 [2024-11-26 19:09:27.311644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:10.302 [2024-11-26 19:09:27.311656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:10.302 [2024-11-26 19:09:27.311670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:10.302 [2024-11-26 19:09:27.311678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:10.302 [2024-11-26 19:09:27.312222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:10.302 [2024-11-26 19:09:27.312233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:10.302 [2024-11-26 19:09:27.312247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:10.302 [2024-11-26 19:09:27.312255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:10.302 passed 00:19:10.302 Test: blockdev nvme passthru rw ...passed 00:19:10.302 Test: blockdev nvme passthru vendor specific ...[2024-11-26 19:09:27.396842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:10.302 [2024-11-26 19:09:27.396857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:10.302 [2024-11-26 19:09:27.397237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:10.302 [2024-11-26 19:09:27.397249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:10.302 [2024-11-26 19:09:27.397599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:10.302 [2024-11-26 19:09:27.397609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:10.302 [2024-11-26 19:09:27.397985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:10.302 [2024-11-26 19:09:27.397997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:10.302 passed 00:19:10.302 Test: blockdev nvme admin passthru ...passed 00:19:10.302 Test: blockdev copy ...passed 00:19:10.302 00:19:10.302 Run Summary: Type Total Ran Passed Failed Inactive 00:19:10.302 suites 1 1 n/a 0 0 00:19:10.302 tests 23 23 23 0 0 00:19:10.302 asserts 152 152 152 0 n/a 00:19:10.302 00:19:10.302 Elapsed time = 1.161 seconds 00:19:10.562 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:10.562 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.562 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:10.562 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.562 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:10.562 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:10.562 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:10.562 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:19:10.562 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:10.562 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:19:10.562 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:10.562 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:10.821 rmmod nvme_tcp 00:19:10.822 rmmod nvme_fabrics 00:19:10.822 rmmod nvme_keyring 00:19:10.822 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:10.822 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:19:10.822 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:19:10.822 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2946718 ']' 00:19:10.822 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2946718 00:19:10.822 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2946718 ']' 00:19:10.822 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2946718 00:19:10.822 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:19:10.822 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:10.822 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2946718 00:19:10.822 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:19:10.822 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:19:10.822 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2946718' 00:19:10.822 killing process with pid 2946718 00:19:10.822 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2946718 00:19:10.822 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2946718 00:19:11.081 19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:11.081 19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:11.081 19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:11.081 19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:11.081 19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:19:11.081 19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:11.081 19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:19:11.081 19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:11.081 19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:11.081 19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.081 19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:11.081 19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.623 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:13.623 00:19:13.623 real 0m12.566s 00:19:13.623 user 0m14.586s 00:19:13.623 sys 0m6.626s 00:19:13.623 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:19:13.624 ************************************ 00:19:13.624 END TEST nvmf_bdevio_no_huge 00:19:13.624 ************************************ 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:13.624 ************************************ 00:19:13.624 START TEST nvmf_tls 00:19:13.624 ************************************ 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:13.624 * Looking for test storage... 00:19:13.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:13.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.624 --rc genhtml_branch_coverage=1 00:19:13.624 --rc genhtml_function_coverage=1 00:19:13.624 --rc genhtml_legend=1 00:19:13.624 --rc geninfo_all_blocks=1 00:19:13.624 --rc geninfo_unexecuted_blocks=1 00:19:13.624 00:19:13.624 ' 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:13.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.624 --rc genhtml_branch_coverage=1 00:19:13.624 --rc genhtml_function_coverage=1 00:19:13.624 --rc genhtml_legend=1 00:19:13.624 --rc geninfo_all_blocks=1 00:19:13.624 --rc geninfo_unexecuted_blocks=1 00:19:13.624 00:19:13.624 ' 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:13.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.624 --rc genhtml_branch_coverage=1 00:19:13.624 --rc genhtml_function_coverage=1 00:19:13.624 --rc genhtml_legend=1 00:19:13.624 --rc geninfo_all_blocks=1 00:19:13.624 --rc geninfo_unexecuted_blocks=1 00:19:13.624 00:19:13.624 ' 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:13.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.624 --rc genhtml_branch_coverage=1 00:19:13.624 --rc genhtml_function_coverage=1 00:19:13.624 --rc genhtml_legend=1 00:19:13.624 --rc geninfo_all_blocks=1 00:19:13.624 --rc geninfo_unexecuted_blocks=1 00:19:13.624 00:19:13.624 ' 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
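The START TEST banner above comes from run_test, which only times the suite and propagates its arguments; the suite itself is an ordinary script, so the equivalent manual invocation (workspace path as in this log) would be:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  test/nvmf/target/tls.sh --transport=tcp   # same argument run_test passes through

The lcov/LCOV_OPTS trace above it is coverage-harness setup from autotest_common.sh and carries no test logic of its own.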
00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:13.624 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:13.625 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:19:13.625 19:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:21.758 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:21.758 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:21.758 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:21.758 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:21.758 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:21.758 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:21.758 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:21.758 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:21.758 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:21.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:21.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:19:21.758 00:19:21.758 --- 10.0.0.2 ping statistics --- 00:19:21.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.758 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:19:21.758 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:21.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:21.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:19:21.759 00:19:21.759 --- 10.0.0.1 ping statistics --- 00:19:21.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.759 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:19:21.759 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:21.759 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:19:21.759 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:21.759 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:21.759 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:21.759 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:21.759 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:21.759 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:21.759 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:21.759 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:21.759 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:21.759 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:21.759 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:21.759 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2951474 00:19:21.759 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2951474 00:19:21.759 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:21.759 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2951474 ']' 00:19:21.759 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.759 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:21.759 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:21.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:21.759 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:21.759 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:21.759 [2024-11-26 19:09:38.206185] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
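Note the --wait-for-rpc flag on this nvmf_tgt launch: it parks the target before subsystem initialization so the ssl socket implementation can be configured first, and the suite resumes startup explicitly once that is done. The ordering, using the exact RPCs that appear further down in this log (run from the spdk checkout):

  # with nvmf_tgt running under --wait-for-rpc:
  scripts/rpc.py sock_set_default_impl -i ssl
  scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
  scripts/rpc.py framework_start_init   # initialization completes only here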
00:19:21.759 [2024-11-26 19:09:38.206252] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:21.759 [2024-11-26 19:09:38.307470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.759 [2024-11-26 19:09:38.358402] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:21.759 [2024-11-26 19:09:38.358453] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:21.759 [2024-11-26 19:09:38.358463] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:21.759 [2024-11-26 19:09:38.358470] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:21.759 [2024-11-26 19:09:38.358476] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:21.759 [2024-11-26 19:09:38.359259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.018 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:22.018 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:22.018 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:22.018 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:22.018 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.018 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:22.018 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:22.018 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:22.277 true 00:19:22.277 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:22.277 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:22.277 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:22.277 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:22.277 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:22.537 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:22.537 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:22.797 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:22.798 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:22.798 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:23.058 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:23.058 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:23.058 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:23.058 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:23.058 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:23.058 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:23.319 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:23.319 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:23.319 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:23.579 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:23.579 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:23.579 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:23.579 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:23.579 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:23.840 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:23.840 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:24.101 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:24.101 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:24.101 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:24.101 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:24.101 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:24.101 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:24.101 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:24.101 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:24.101 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:24.101 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:24.101 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:24.101 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:24.101 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:19:24.101 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:24.101 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:19:24.101 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:24.101 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:24.101 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:24.101 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:24.101 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.oAkoTCNcKN 00:19:24.101 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:24.101 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.L5PqOGh5CM 00:19:24.101 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:24.101 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:24.101 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.oAkoTCNcKN 00:19:24.101 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.L5PqOGh5CM 00:19:24.101 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:24.361 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:24.622 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.oAkoTCNcKN 00:19:24.622 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.oAkoTCNcKN 00:19:24.622 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:24.883 [2024-11-26 19:09:41.850492] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:24.883 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:24.883 19:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:25.142 [2024-11-26 19:09:42.219399] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:25.142 [2024-11-26 19:09:42.219606] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:25.142 19:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:25.402 malloc0 00:19:25.402 19:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:25.402 19:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.oAkoTCNcKN 00:19:25.662 19:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:25.921 19:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.oAkoTCNcKN 00:19:35.920 Initializing NVMe Controllers 00:19:35.920 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:35.920 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:35.920 Initialization complete. Launching workers. 00:19:35.920 ======================================================== 00:19:35.920 Latency(us) 00:19:35.920 Device Information : IOPS MiB/s Average min max 00:19:35.920 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18790.28 73.40 3406.21 1137.58 4011.68 00:19:35.920 ======================================================== 00:19:35.920 Total : 18790.28 73.40 3406.21 1137.58 4011.68 00:19:35.920 00:19:35.920 19:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oAkoTCNcKN 00:19:35.920 19:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:35.920 19:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:35.920 19:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:35.920 19:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.oAkoTCNcKN 00:19:35.920 19:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:35.920 19:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2954493 00:19:35.920 19:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:35.920 19:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2954493 /var/tmp/bdevperf.sock 00:19:35.920 19:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:35.920 19:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2954493 ']' 00:19:35.920 19:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:35.920 19:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:35.920 19:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:35.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:35.920 19:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:35.920 19:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.182 [2024-11-26 19:09:53.151659] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:19:36.182 [2024-11-26 19:09:53.151718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2954493 ] 00:19:36.182 [2024-11-26 19:09:53.239613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.182 [2024-11-26 19:09:53.274683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:36.752 19:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:36.752 19:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:36.752 19:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oAkoTCNcKN 00:19:37.013 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:37.273 [2024-11-26 19:09:54.291819] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:37.273 TLSTESTn1 00:19:37.273 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:37.273 Running I/O for 10 seconds... 
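The per-second throughput samples that follow come from a remotely driven run: bdevperf was started idle and the workload is kicked off over its RPC socket. For orientation, the two -t values in the commands above play different roles; a sketch using the paths from this run (the comments are interpretation, not log output):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bdevperf.sock
    # -z starts bdevperf idle, waiting on the RPC socket; -t 10 is the I/O run time
    $SPDK/build/examples/bdevperf -m 0x4 -z -r $SOCK -q 128 -o 4096 -w verify -t 10 &
    # after keyring_file_add_key and bdev_nvme_attach_controller (traced above),
    # perform_tests starts the workload; its -t 20 is the RPC wait timeout
    $SPDK/examples/bdev/bdevperf/bdevperf.py -t 20 -s $SOCK perform_tests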
00:19:39.599 6053.00 IOPS, 23.64 MiB/s [2024-11-26T18:09:57.753Z] 5828.50 IOPS, 22.77 MiB/s [2024-11-26T18:09:58.693Z] 5697.33 IOPS, 22.26 MiB/s [2024-11-26T18:09:59.637Z] 5860.00 IOPS, 22.89 MiB/s [2024-11-26T18:10:00.578Z] 5854.60 IOPS, 22.87 MiB/s [2024-11-26T18:10:01.517Z] 5938.50 IOPS, 23.20 MiB/s [2024-11-26T18:10:02.898Z] 5978.57 IOPS, 23.35 MiB/s [2024-11-26T18:10:03.841Z] 5913.50 IOPS, 23.10 MiB/s [2024-11-26T18:10:04.784Z] 5955.00 IOPS, 23.26 MiB/s [2024-11-26T18:10:04.784Z] 5975.70 IOPS, 23.34 MiB/s 00:19:47.571 Latency(us) 00:19:47.571 [2024-11-26T18:10:04.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.571 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:47.571 Verification LBA range: start 0x0 length 0x2000 00:19:47.571 TLSTESTn1 : 10.02 5973.72 23.33 0.00 0.00 21385.56 5652.48 33860.27 00:19:47.571 [2024-11-26T18:10:04.784Z] =================================================================================================================== 00:19:47.571 [2024-11-26T18:10:04.784Z] Total : 5973.72 23.33 0.00 0.00 21385.56 5652.48 33860.27 00:19:47.571 { 00:19:47.571 "results": [ 00:19:47.571 { 00:19:47.571 "job": "TLSTESTn1", 00:19:47.571 "core_mask": "0x4", 00:19:47.571 "workload": "verify", 00:19:47.571 "status": "finished", 00:19:47.571 "verify_range": { 00:19:47.571 "start": 0, 00:19:47.571 "length": 8192 00:19:47.571 }, 00:19:47.571 "queue_depth": 128, 00:19:47.571 "io_size": 4096, 00:19:47.571 "runtime": 10.024739, 00:19:47.571 "iops": 5973.7216101087515, 00:19:47.571 "mibps": 23.33485003948731, 00:19:47.571 "io_failed": 0, 00:19:47.571 "io_timeout": 0, 00:19:47.571 "avg_latency_us": 21385.560154741033, 00:19:47.571 "min_latency_us": 5652.48, 00:19:47.571 "max_latency_us": 33860.26666666667 00:19:47.571 } 00:19:47.571 ], 00:19:47.571 "core_count": 1 00:19:47.571 } 00:19:47.571 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:47.571 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2954493 00:19:47.571 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2954493 ']' 00:19:47.571 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2954493 00:19:47.571 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:47.571 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:47.571 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2954493 00:19:47.571 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:47.571 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:47.571 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2954493' 00:19:47.571 killing process with pid 2954493 00:19:47.571 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2954493 00:19:47.571 Received shutdown signal, test time was about 10.000000 seconds 00:19:47.571 00:19:47.571 Latency(us) 00:19:47.571 [2024-11-26T18:10:04.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.571 [2024-11-26T18:10:04.784Z] 
=================================================================================================================== 00:19:47.571 [2024-11-26T18:10:04.784Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:47.571 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2954493 00:19:47.571 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.L5PqOGh5CM 00:19:47.571 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:47.571 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.L5PqOGh5CM 00:19:47.571 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:47.571 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:47.571 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:47.571 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:47.571 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.L5PqOGh5CM 00:19:47.571 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:47.571 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:47.571 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:47.571 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.L5PqOGh5CM 00:19:47.571 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:47.571 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2956765 00:19:47.571 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:47.571 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2956765 /var/tmp/bdevperf.sock 00:19:47.571 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:47.571 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2956765 ']' 00:19:47.571 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:47.571 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:47.571 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:47.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
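The launch at target/tls.sh@147 is the first negative case: bdevperf is handed key_2 (/tmp/tmp.L5PqOGh5CM), which was never registered with the target, and the attempt is wrapped in NOT so that the expected connection failure counts as a pass. A simplified sketch of that wrapper, consistent with the local es=0 / es=1 / (( !es == 0 )) traces in this log (the real helper in autotest_common.sh handles more cases):

    NOT() {
        local es=0
        "$@" || es=$?    # run the wrapped command and capture its exit status
        ((!es == 0))     # invert: only a non-zero status keeps the test green
    }
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.L5PqOGh5CM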
00:19:47.571 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:47.571 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:47.571 [2024-11-26 19:10:04.771255] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:19:47.571 [2024-11-26 19:10:04.771312] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2956765 ] 00:19:47.832 [2024-11-26 19:10:04.855512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.832 [2024-11-26 19:10:04.883603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:48.404 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:48.404 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:48.404 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.L5PqOGh5CM 00:19:48.665 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:48.925 [2024-11-26 19:10:05.899234] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:48.925 [2024-11-26 19:10:05.907006] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:48.925 [2024-11-26 19:10:05.907320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf2be0 (107): Transport endpoint is not connected 00:19:48.925 [2024-11-26 19:10:05.908316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf2be0 (9): Bad file descriptor 00:19:48.925 [2024-11-26 19:10:05.909318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:48.925 [2024-11-26 19:10:05.909326] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:48.925 [2024-11-26 19:10:05.909332] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:48.925 [2024-11-26 19:10:05.909338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
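On the initiator side a rejected TLS handshake looks like a dropped connection: the first flush fails with errno 107, the queue pair is torn down, subsequent polling sees errno 9 on the dead descriptor, and the controller lands in the failed state above. The RPC layer then reports this as code -5 in the request/response dump that follows. The errno names can be confirmed with a one-liner (illustrative, not part of the test):

    python3 -c 'import errno, os; print(errno.errorcode[107], "-", os.strerror(107))'
    # ENOTCONN - Transport endpoint is not connected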
00:19:48.925 request: 00:19:48.925 { 00:19:48.925 "name": "TLSTEST", 00:19:48.925 "trtype": "tcp", 00:19:48.925 "traddr": "10.0.0.2", 00:19:48.925 "adrfam": "ipv4", 00:19:48.925 "trsvcid": "4420", 00:19:48.925 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.925 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:48.925 "prchk_reftag": false, 00:19:48.925 "prchk_guard": false, 00:19:48.925 "hdgst": false, 00:19:48.925 "ddgst": false, 00:19:48.925 "psk": "key0", 00:19:48.925 "allow_unrecognized_csi": false, 00:19:48.925 "method": "bdev_nvme_attach_controller", 00:19:48.925 "req_id": 1 00:19:48.925 } 00:19:48.925 Got JSON-RPC error response 00:19:48.925 response: 00:19:48.925 { 00:19:48.925 "code": -5, 00:19:48.925 "message": "Input/output error" 00:19:48.925 } 00:19:48.925 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2956765 00:19:48.925 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2956765 ']' 00:19:48.925 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2956765 00:19:48.925 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:48.925 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:48.925 19:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2956765 00:19:48.925 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:48.925 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:48.925 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2956765' 00:19:48.925 killing process with pid 2956765 00:19:48.925 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2956765 00:19:48.925 Received shutdown signal, test time was about 10.000000 seconds 00:19:48.925 00:19:48.925 Latency(us) 00:19:48.925 [2024-11-26T18:10:06.138Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.925 [2024-11-26T18:10:06.138Z] =================================================================================================================== 00:19:48.925 [2024-11-26T18:10:06.138Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:48.925 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2956765 00:19:48.925 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:48.925 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:48.925 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:48.925 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:48.925 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:48.925 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.oAkoTCNcKN 00:19:48.925 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:48.925 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.oAkoTCNcKN 00:19:48.925 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:48.925 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:48.925 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:48.925 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:48.925 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.oAkoTCNcKN 00:19:48.925 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:48.925 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:48.925 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:48.925 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.oAkoTCNcKN 00:19:48.925 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:48.925 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2956934 00:19:48.925 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:48.925 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2956934 /var/tmp/bdevperf.sock 00:19:48.925 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:48.925 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2956934 ']' 00:19:48.925 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:48.925 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:48.925 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:48.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:48.926 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:48.926 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.187 [2024-11-26 19:10:06.155387] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
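This second negative case fails for a more specific reason than the previous one: the host NQN is switched to host2, so the target cannot find a PSK for the TLS identity it derives from the connection. As the server-side errors below show, that identity is the string 'NVMe0R01 <hostnqn> <subnqn>'; assembling one by hand looks like this (pattern read off this log rather than quoted from the spec):

    hostnqn=nqn.2016-06.io.spdk:host2
    subnqn=nqn.2016-06.io.spdk:cnode1
    printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"
    # NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1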
00:19:49.187 [2024-11-26 19:10:06.155445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2956934 ] 00:19:49.187 [2024-11-26 19:10:06.238172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.187 [2024-11-26 19:10:06.266499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.760 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:49.760 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:49.760 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oAkoTCNcKN 00:19:50.019 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:50.279 [2024-11-26 19:10:07.282655] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:50.279 [2024-11-26 19:10:07.290280] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:50.279 [2024-11-26 19:10:07.290300] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:50.279 [2024-11-26 19:10:07.290318] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:50.279 [2024-11-26 19:10:07.290893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154ebe0 (107): Transport endpoint is not connected 00:19:50.279 [2024-11-26 19:10:07.291889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154ebe0 (9): Bad file descriptor 00:19:50.279 [2024-11-26 19:10:07.292892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:50.279 [2024-11-26 19:10:07.292901] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:50.279 [2024-11-26 19:10:07.292908] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:50.279 [2024-11-26 19:10:07.292914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:19:50.279 request: 00:19:50.279 { 00:19:50.279 "name": "TLSTEST", 00:19:50.279 "trtype": "tcp", 00:19:50.279 "traddr": "10.0.0.2", 00:19:50.279 "adrfam": "ipv4", 00:19:50.280 "trsvcid": "4420", 00:19:50.280 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.280 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:50.280 "prchk_reftag": false, 00:19:50.280 "prchk_guard": false, 00:19:50.280 "hdgst": false, 00:19:50.280 "ddgst": false, 00:19:50.280 "psk": "key0", 00:19:50.280 "allow_unrecognized_csi": false, 00:19:50.280 "method": "bdev_nvme_attach_controller", 00:19:50.280 "req_id": 1 00:19:50.280 } 00:19:50.280 Got JSON-RPC error response 00:19:50.280 response: 00:19:50.280 { 00:19:50.280 "code": -5, 00:19:50.280 "message": "Input/output error" 00:19:50.280 } 00:19:50.280 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2956934 00:19:50.280 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2956934 ']' 00:19:50.280 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2956934 00:19:50.280 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:50.280 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:50.280 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2956934 00:19:50.280 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:50.280 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:50.280 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2956934' 00:19:50.280 killing process with pid 2956934 00:19:50.280 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2956934 00:19:50.280 Received shutdown signal, test time was about 10.000000 seconds 00:19:50.280 00:19:50.280 Latency(us) 00:19:50.280 [2024-11-26T18:10:07.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.280 [2024-11-26T18:10:07.493Z] =================================================================================================================== 00:19:50.280 [2024-11-26T18:10:07.493Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:50.280 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2956934 00:19:50.280 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:50.280 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:50.280 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:50.280 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:50.280 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:50.280 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.oAkoTCNcKN 00:19:50.280 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:50.280 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.oAkoTCNcKN 00:19:50.280 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:50.280 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:50.280 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:50.280 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:50.280 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.oAkoTCNcKN 00:19:50.280 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:50.280 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:50.280 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:50.280 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.oAkoTCNcKN 00:19:50.280 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:50.541 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2957209 00:19:50.541 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:50.541 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2957209 /var/tmp/bdevperf.sock 00:19:50.541 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:50.541 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2957209 ']' 00:19:50.541 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:50.541 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:50.541 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:50.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:50.541 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:50.541 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.541 [2024-11-26 19:10:07.539111] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
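The third case flips the other half of the identity: the host NQN is the registered one, but the subsystem NQN (cnode2) was never created on the target, so the same PSK lookup misses. The pairing the target consults was established on the target side earlier in this run (rpc.py path abbreviated for readability):

    rpc.py keyring_file_add_key key0 /tmp/tmp.oAkoTCNcKN
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
    # only the (cnode1, host1) identity resolves to key0; (cnode2, host1) finds nothing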
00:19:50.541 [2024-11-26 19:10:07.539173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2957209 ] 00:19:50.541 [2024-11-26 19:10:07.624392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.541 [2024-11-26 19:10:07.652485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.541 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:50.541 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:50.541 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oAkoTCNcKN 00:19:50.802 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:51.062 [2024-11-26 19:10:08.054753] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:51.063 [2024-11-26 19:10:08.062156] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:51.063 [2024-11-26 19:10:08.062181] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:51.063 [2024-11-26 19:10:08.062204] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:51.063 [2024-11-26 19:10:08.062900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd11be0 (107): Transport endpoint is not connected 00:19:51.063 [2024-11-26 19:10:08.063895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd11be0 (9): Bad file descriptor 00:19:51.063 [2024-11-26 19:10:08.064898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:19:51.063 [2024-11-26 19:10:08.064908] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:51.063 [2024-11-26 19:10:08.064914] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:51.063 [2024-11-26 19:10:08.064921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
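Note that all three mismatch cases produce the same client-side signature: ENOTCONN during the handshake and, as the dump below repeats for a third time, code -5 'Input/output error' from bdev_nvme_attach_controller. The target closes the connection without saying which check failed, so a caller can only assert on the failure itself; a sketch of such an assertion (command as traced in this run, rpc.py path abbreviated):

    if rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 \
        -q nqn.2016-06.io.spdk:host1 --psk key0; then
        echo 'unexpected success' >&2
        exit 1
    fi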
00:19:51.063 request: 00:19:51.063 { 00:19:51.063 "name": "TLSTEST", 00:19:51.063 "trtype": "tcp", 00:19:51.063 "traddr": "10.0.0.2", 00:19:51.063 "adrfam": "ipv4", 00:19:51.063 "trsvcid": "4420", 00:19:51.063 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:51.063 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:51.063 "prchk_reftag": false, 00:19:51.063 "prchk_guard": false, 00:19:51.063 "hdgst": false, 00:19:51.063 "ddgst": false, 00:19:51.063 "psk": "key0", 00:19:51.063 "allow_unrecognized_csi": false, 00:19:51.063 "method": "bdev_nvme_attach_controller", 00:19:51.063 "req_id": 1 00:19:51.063 } 00:19:51.063 Got JSON-RPC error response 00:19:51.063 response: 00:19:51.063 { 00:19:51.063 "code": -5, 00:19:51.063 "message": "Input/output error" 00:19:51.063 } 00:19:51.063 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2957209 00:19:51.063 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2957209 ']' 00:19:51.063 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2957209 00:19:51.063 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:51.063 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:51.063 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2957209 00:19:51.063 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:51.063 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:51.063 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2957209' 00:19:51.063 killing process with pid 2957209 00:19:51.063 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2957209 00:19:51.063 Received shutdown signal, test time was about 10.000000 seconds 00:19:51.063 00:19:51.063 Latency(us) 00:19:51.063 [2024-11-26T18:10:08.276Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.063 [2024-11-26T18:10:08.276Z] =================================================================================================================== 00:19:51.063 [2024-11-26T18:10:08.276Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:51.063 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2957209 00:19:51.063 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:51.063 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:51.063 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:51.063 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:51.063 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:51.063 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:51.063 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:51.063 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:51.063 
19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:51.063 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:51.063 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:51.063 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:51.063 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:51.063 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:51.063 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:51.063 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:51.063 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:51.063 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:51.063 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2957496 00:19:51.063 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:51.063 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2957496 /var/tmp/bdevperf.sock 00:19:51.063 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:51.063 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2957496 ']' 00:19:51.063 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:51.063 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:51.063 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:51.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:51.063 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:51.063 19:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.324 [2024-11-26 19:10:08.313113] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
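The last negative case differs from the previous three in where it fails: the PSK argument is an empty string, so the error happens locally in the keyring before any connection is attempted. keyring_file_add_key requires an absolute path, which is what the 'Non-absolute paths are not allowed' message below is about, and the attach that follows then fails with -126 because key0 was never registered. For contrast (both calls as traced in this run, rpc.py path abbreviated):

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oAkoTCNcKN   # absolute path: accepted
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''                    # empty path: rejected, code -1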
00:19:51.324 [2024-11-26 19:10:08.313178] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2957496 ] 00:19:51.324 [2024-11-26 19:10:08.396824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.324 [2024-11-26 19:10:08.425092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:52.265 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:52.265 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:52.265 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:52.265 [2024-11-26 19:10:09.256382] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:52.265 [2024-11-26 19:10:09.256410] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:52.265 request: 00:19:52.265 { 00:19:52.265 "name": "key0", 00:19:52.265 "path": "", 00:19:52.265 "method": "keyring_file_add_key", 00:19:52.265 "req_id": 1 00:19:52.265 } 00:19:52.265 Got JSON-RPC error response 00:19:52.265 response: 00:19:52.265 { 00:19:52.265 "code": -1, 00:19:52.265 "message": "Operation not permitted" 00:19:52.265 } 00:19:52.265 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:52.265 [2024-11-26 19:10:09.432910] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:52.265 [2024-11-26 19:10:09.432933] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:52.265 request: 00:19:52.265 { 00:19:52.265 "name": "TLSTEST", 00:19:52.265 "trtype": "tcp", 00:19:52.265 "traddr": "10.0.0.2", 00:19:52.265 "adrfam": "ipv4", 00:19:52.265 "trsvcid": "4420", 00:19:52.265 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:52.265 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:52.265 "prchk_reftag": false, 00:19:52.265 "prchk_guard": false, 00:19:52.265 "hdgst": false, 00:19:52.265 "ddgst": false, 00:19:52.265 "psk": "key0", 00:19:52.265 "allow_unrecognized_csi": false, 00:19:52.265 "method": "bdev_nvme_attach_controller", 00:19:52.265 "req_id": 1 00:19:52.265 } 00:19:52.265 Got JSON-RPC error response 00:19:52.265 response: 00:19:52.265 { 00:19:52.265 "code": -126, 00:19:52.265 "message": "Required key not available" 00:19:52.265 } 00:19:52.265 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2957496 00:19:52.265 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2957496 ']' 00:19:52.265 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2957496 00:19:52.265 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:52.265 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:52.526 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2957496 00:19:52.526 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:52.526 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:52.526 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2957496' 00:19:52.526 killing process with pid 2957496 00:19:52.526 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2957496 00:19:52.526 Received shutdown signal, test time was about 10.000000 seconds 00:19:52.526 00:19:52.526 Latency(us) 00:19:52.526 [2024-11-26T18:10:09.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.526 [2024-11-26T18:10:09.739Z] =================================================================================================================== 00:19:52.526 [2024-11-26T18:10:09.739Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:52.526 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2957496 00:19:52.526 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:52.526 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:52.526 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:52.526 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:52.526 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:52.526 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2951474 00:19:52.526 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2951474 ']' 00:19:52.526 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2951474 00:19:52.526 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:52.526 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:52.526 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2951474 00:19:52.526 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:52.526 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:52.526 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2951474' 00:19:52.526 killing process with pid 2951474 00:19:52.526 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2951474 00:19:52.526 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2951474 00:19:52.787 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:52.787 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:52.787 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:52.787 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:52.787 19:10:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:52.787 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:19:52.787 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:52.787 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:52.787 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:52.787 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.A5f6G1FWFx 00:19:52.787 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:52.787 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.A5f6G1FWFx 00:19:52.787 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:52.787 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:52.787 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:52.787 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.787 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2957795 00:19:52.787 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2957795 00:19:52.787 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:52.787 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2957795 ']' 00:19:52.787 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.787 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:52.787 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.787 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:52.787 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.787 [2024-11-26 19:10:09.925948] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:19:52.788 [2024-11-26 19:10:09.926013] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.048 [2024-11-26 19:10:10.017783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.048 [2024-11-26 19:10:10.055904] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.048 [2024-11-26 19:10:10.055936] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:53.048 [2024-11-26 19:10:10.055943] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:53.048 [2024-11-26 19:10:10.055948] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:53.048 [2024-11-26 19:10:10.055952] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:53.048 [2024-11-26 19:10:10.056452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.619 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:53.619 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:53.619 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:53.619 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:53.619 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.619 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.619 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.A5f6G1FWFx 00:19:53.619 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.A5f6G1FWFx 00:19:53.619 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:53.881 [2024-11-26 19:10:10.913073] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.881 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:54.141 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:54.141 [2024-11-26 19:10:11.269943] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:54.141 [2024-11-26 19:10:11.270139] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:54.141 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:54.401 malloc0 00:19:54.401 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:54.663 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.A5f6G1FWFx 00:19:54.663 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:54.923 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.A5f6G1FWFx 00:19:54.923 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:19:54.923 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:54.923 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:54.923 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.A5f6G1FWFx 00:19:54.923 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:54.923 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:54.923 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2958258 00:19:54.923 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:54.923 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2958258 /var/tmp/bdevperf.sock 00:19:54.923 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2958258 ']' 00:19:54.923 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:54.923 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:54.923 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:54.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:54.923 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:54.923 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.923 [2024-11-26 19:10:12.067273] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
00:19:54.923 [2024-11-26 19:10:12.067326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2958258 ] 00:19:55.184 [2024-11-26 19:10:12.150871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.184 [2024-11-26 19:10:12.179499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:55.184 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:55.184 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:55.184 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.A5f6G1FWFx 00:19:55.445 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:55.445 [2024-11-26 19:10:12.601818] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:55.706 TLSTESTn1 00:19:55.706 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:55.706 Running I/O for 10 seconds... 00:19:57.585 4927.00 IOPS, 19.25 MiB/s [2024-11-26T18:10:16.179Z] 5190.50 IOPS, 20.28 MiB/s [2024-11-26T18:10:17.162Z] 5290.67 IOPS, 20.67 MiB/s [2024-11-26T18:10:17.804Z] 5492.75 IOPS, 21.46 MiB/s [2024-11-26T18:10:19.188Z] 5582.80 IOPS, 21.81 MiB/s [2024-11-26T18:10:20.128Z] 5626.67 IOPS, 21.98 MiB/s [2024-11-26T18:10:21.070Z] 5744.43 IOPS, 22.44 MiB/s [2024-11-26T18:10:22.013Z] 5749.75 IOPS, 22.46 MiB/s [2024-11-26T18:10:22.956Z] 5654.67 IOPS, 22.09 MiB/s [2024-11-26T18:10:22.956Z] 5658.80 IOPS, 22.10 MiB/s 00:20:05.743 Latency(us) 00:20:05.743 [2024-11-26T18:10:22.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.743 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:05.743 Verification LBA range: start 0x0 length 0x2000 00:20:05.743 TLSTESTn1 : 10.02 5660.90 22.11 0.00 0.00 22574.34 5980.16 31894.19 00:20:05.743 [2024-11-26T18:10:22.956Z] =================================================================================================================== 00:20:05.743 [2024-11-26T18:10:22.956Z] Total : 5660.90 22.11 0.00 0.00 22574.34 5980.16 31894.19 00:20:05.743 { 00:20:05.743 "results": [ 00:20:05.743 { 00:20:05.743 "job": "TLSTESTn1", 00:20:05.743 "core_mask": "0x4", 00:20:05.743 "workload": "verify", 00:20:05.743 "status": "finished", 00:20:05.743 "verify_range": { 00:20:05.743 "start": 0, 00:20:05.743 "length": 8192 00:20:05.743 }, 00:20:05.743 "queue_depth": 128, 00:20:05.743 "io_size": 4096, 00:20:05.743 "runtime": 10.018724, 00:20:05.743 "iops": 5660.900529847912, 00:20:05.743 "mibps": 22.112892694718408, 00:20:05.743 "io_failed": 0, 00:20:05.743 "io_timeout": 0, 00:20:05.743 "avg_latency_us": 22574.34068682594, 00:20:05.743 "min_latency_us": 5980.16, 00:20:05.743 "max_latency_us": 31894.18666666667 00:20:05.743 } 00:20:05.743 ], 00:20:05.743 "core_count": 1 
00:20:05.743 } 00:20:05.743 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:05.743 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2958258 00:20:05.743 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2958258 ']' 00:20:05.743 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2958258 00:20:05.743 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:05.743 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:05.743 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2958258 00:20:05.743 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:05.743 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:05.743 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2958258' 00:20:05.743 killing process with pid 2958258 00:20:05.743 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2958258 00:20:05.743 Received shutdown signal, test time was about 10.000000 seconds 00:20:05.743 00:20:05.743 Latency(us) 00:20:05.743 [2024-11-26T18:10:22.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.743 [2024-11-26T18:10:22.956Z] =================================================================================================================== 00:20:05.743 [2024-11-26T18:10:22.956Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:05.743 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2958258 00:20:06.004 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.A5f6G1FWFx 00:20:06.004 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.A5f6G1FWFx 00:20:06.004 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:06.004 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.A5f6G1FWFx 00:20:06.004 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:06.004 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:06.004 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:06.004 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:06.004 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.A5f6G1FWFx 00:20:06.004 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:06.004 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:06.004 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:06.004 19:10:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.A5f6G1FWFx 00:20:06.004 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:06.004 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2960282 00:20:06.004 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:06.004 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2960282 /var/tmp/bdevperf.sock 00:20:06.004 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:06.004 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2960282 ']' 00:20:06.004 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:06.004 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:06.004 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:06.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:06.004 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:06.004 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.004 [2024-11-26 19:10:23.085395] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
00:20:06.004 [2024-11-26 19:10:23.085449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2960282 ] 00:20:06.004 [2024-11-26 19:10:23.169202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.004 [2024-11-26 19:10:23.196678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:06.946 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:06.946 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:06.946 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.A5f6G1FWFx 00:20:06.946 [2024-11-26 19:10:24.040123] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.A5f6G1FWFx': 0100666 00:20:06.946 [2024-11-26 19:10:24.040149] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:06.946 request: 00:20:06.946 { 00:20:06.946 "name": "key0", 00:20:06.946 "path": "/tmp/tmp.A5f6G1FWFx", 00:20:06.946 "method": "keyring_file_add_key", 00:20:06.946 "req_id": 1 00:20:06.946 } 00:20:06.946 Got JSON-RPC error response 00:20:06.946 response: 00:20:06.946 { 00:20:06.946 "code": -1, 00:20:06.946 "message": "Operation not permitted" 00:20:06.946 } 00:20:06.946 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:07.206 [2024-11-26 19:10:24.216648] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:07.206 [2024-11-26 19:10:24.216671] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:07.206 request: 00:20:07.206 { 00:20:07.206 "name": "TLSTEST", 00:20:07.206 "trtype": "tcp", 00:20:07.206 "traddr": "10.0.0.2", 00:20:07.206 "adrfam": "ipv4", 00:20:07.206 "trsvcid": "4420", 00:20:07.206 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.206 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:07.206 "prchk_reftag": false, 00:20:07.206 "prchk_guard": false, 00:20:07.206 "hdgst": false, 00:20:07.206 "ddgst": false, 00:20:07.206 "psk": "key0", 00:20:07.206 "allow_unrecognized_csi": false, 00:20:07.207 "method": "bdev_nvme_attach_controller", 00:20:07.207 "req_id": 1 00:20:07.207 } 00:20:07.207 Got JSON-RPC error response 00:20:07.207 response: 00:20:07.207 { 00:20:07.207 "code": -126, 00:20:07.207 "message": "Required key not available" 00:20:07.207 } 00:20:07.207 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2960282 00:20:07.207 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2960282 ']' 00:20:07.207 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2960282 00:20:07.207 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:07.207 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:07.207 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2960282 00:20:07.207 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:07.207 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:07.207 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2960282' 00:20:07.207 killing process with pid 2960282 00:20:07.207 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2960282 00:20:07.207 Received shutdown signal, test time was about 10.000000 seconds 00:20:07.207 00:20:07.207 Latency(us) 00:20:07.207 [2024-11-26T18:10:24.420Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:07.207 [2024-11-26T18:10:24.420Z] =================================================================================================================== 00:20:07.207 [2024-11-26T18:10:24.420Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:07.207 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2960282 00:20:07.207 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:07.207 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:07.207 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:07.207 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:07.207 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:07.207 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2957795 00:20:07.207 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2957795 ']' 00:20:07.207 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2957795 00:20:07.207 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:07.207 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:07.207 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2957795 00:20:07.467 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:07.467 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:07.467 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2957795' 00:20:07.467 killing process with pid 2957795 00:20:07.467 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2957795 00:20:07.467 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2957795 00:20:07.467 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:20:07.467 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:07.467 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:07.467 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.467 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=2960627 00:20:07.467 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2960627 00:20:07.467 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:07.467 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2960627 ']' 00:20:07.467 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.467 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:07.467 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.467 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:07.467 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.467 [2024-11-26 19:10:24.641818] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:20:07.467 [2024-11-26 19:10:24.641869] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.729 [2024-11-26 19:10:24.709116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.729 [2024-11-26 19:10:24.736802] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.729 [2024-11-26 19:10:24.736832] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.729 [2024-11-26 19:10:24.736839] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.729 [2024-11-26 19:10:24.736844] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.729 [2024-11-26 19:10:24.736849] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
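The target has just been restarted (tls.sh@176) with the key file still at the 0666 mode set at tls.sh@171, so tls.sh@178 re-runs setup_nvmf_tgt under the NOT wrapper and expects it to fail at keyring_file_add_key. A condensed sketch of that negative path, with rpc.py standing in for the full scripts/rpc.py path and $key_path as an illustrative name for the mktemp'd key file (both abbreviations are mine, not from the log):

    chmod 0666 "$key_path"
    if rpc.py keyring_file_add_key key0 "$key_path"; then
        echo "FAIL: keyring accepted a group/world-readable key file" >&2
        exit 1
    fi
    # keyring_file appears to require owner-only (0600) permissions; the RPC
    # fails with -1 "Operation not permitted", which NOT inverts into a pass.
    chmod 0600 "$key_path"
    rpc.py keyring_file_add_key key0 "$key_path"   # succeeds again

The knock-on errors in this stretch of the log follow from the key never making it into the keyring: bdev_nvme_attach_controller reported -126 "Required key not available" above, and nvmf_subsystem_add_host reports -32603 "Internal error" just below, because key0 does not exist.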
00:20:07.729 [2024-11-26 19:10:24.737304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.729 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:07.729 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:07.729 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:07.729 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:07.729 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.729 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.729 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.A5f6G1FWFx 00:20:07.729 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:07.729 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.A5f6G1FWFx 00:20:07.729 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:20:07.729 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:07.729 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:20:07.729 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:07.729 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.A5f6G1FWFx 00:20:07.729 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.A5f6G1FWFx 00:20:07.729 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:07.990 [2024-11-26 19:10:25.024317] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:07.990 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:08.251 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:08.251 [2024-11-26 19:10:25.389209] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:08.251 [2024-11-26 19:10:25.389413] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:08.251 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:08.511 malloc0 00:20:08.511 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:08.772 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.A5f6G1FWFx 00:20:08.772 [2024-11-26 
19:10:25.928368] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.A5f6G1FWFx': 0100666 00:20:08.772 [2024-11-26 19:10:25.928392] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:08.772 request: 00:20:08.772 { 00:20:08.772 "name": "key0", 00:20:08.772 "path": "/tmp/tmp.A5f6G1FWFx", 00:20:08.772 "method": "keyring_file_add_key", 00:20:08.772 "req_id": 1 00:20:08.772 } 00:20:08.772 Got JSON-RPC error response 00:20:08.772 response: 00:20:08.772 { 00:20:08.772 "code": -1, 00:20:08.772 "message": "Operation not permitted" 00:20:08.772 } 00:20:08.772 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:09.033 [2024-11-26 19:10:26.104824] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:20:09.033 [2024-11-26 19:10:26.104847] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:09.033 request: 00:20:09.033 { 00:20:09.033 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.033 "host": "nqn.2016-06.io.spdk:host1", 00:20:09.033 "psk": "key0", 00:20:09.033 "method": "nvmf_subsystem_add_host", 00:20:09.033 "req_id": 1 00:20:09.033 } 00:20:09.033 Got JSON-RPC error response 00:20:09.034 response: 00:20:09.034 { 00:20:09.034 "code": -32603, 00:20:09.034 "message": "Internal error" 00:20:09.034 } 00:20:09.034 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:09.034 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:09.034 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:09.034 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:09.034 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2960627 00:20:09.034 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2960627 ']' 00:20:09.034 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2960627 00:20:09.034 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:09.034 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:09.034 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2960627 00:20:09.034 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:09.034 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:09.034 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2960627' 00:20:09.034 killing process with pid 2960627 00:20:09.034 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2960627 00:20:09.034 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2960627 00:20:09.294 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.A5f6G1FWFx 00:20:09.294 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:20:09.294 19:10:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:09.294 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:09.294 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.294 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2960996 00:20:09.294 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2960996 00:20:09.294 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:09.294 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2960996 ']' 00:20:09.294 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.294 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:09.294 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.294 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:09.294 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.294 [2024-11-26 19:10:26.370847] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:20:09.294 [2024-11-26 19:10:26.370899] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.294 [2024-11-26 19:10:26.458473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.294 [2024-11-26 19:10:26.486542] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:09.294 [2024-11-26 19:10:26.486575] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:09.294 [2024-11-26 19:10:26.486581] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:09.294 [2024-11-26 19:10:26.486587] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:09.294 [2024-11-26 19:10:26.486591] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
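With the key back at 0600 (tls.sh@182), the test brings the target up once more and setup_nvmf_tgt now succeeds; the configuration it builds is then dumped below via save_config. Condensed, this is the target-side RPC sequence repeated throughout this log (rpc.py abbreviates /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py; all arguments as they appear in the traces above):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k marks the listener as TLS-enabled (logged as "TLS support is considered experimental")
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.A5f6G1FWFx
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The initiator side mirrors it against the bdevperf RPC socket, as the bdevperf instances in this log do:

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.A5f6G1FWFx
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key0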
00:20:09.294 [2024-11-26 19:10:26.487049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:10.237 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:10.237 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:10.237 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:10.237 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:10.237 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.237 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:10.237 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.A5f6G1FWFx 00:20:10.237 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.A5f6G1FWFx 00:20:10.237 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:10.237 [2024-11-26 19:10:27.367692] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:10.237 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:10.497 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:10.758 [2024-11-26 19:10:27.728583] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:10.758 [2024-11-26 19:10:27.728783] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:10.759 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:10.759 malloc0 00:20:10.759 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:11.019 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.A5f6G1FWFx 00:20:11.280 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:11.280 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2961362 00:20:11.280 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:11.280 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:11.280 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2961362 /var/tmp/bdevperf.sock 00:20:11.280 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 2961362 ']' 00:20:11.280 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:11.280 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:11.280 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:11.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:11.280 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:11.280 19:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.540 [2024-11-26 19:10:28.502092] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:20:11.540 [2024-11-26 19:10:28.502143] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2961362 ] 00:20:11.540 [2024-11-26 19:10:28.591957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.540 [2024-11-26 19:10:28.627155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:12.115 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:12.115 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:12.115 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.A5f6G1FWFx 00:20:12.374 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:12.635 [2024-11-26 19:10:29.651996] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:12.635 TLSTESTn1 00:20:12.635 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:12.896 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:20:12.896 "subsystems": [ 00:20:12.896 { 00:20:12.896 "subsystem": "keyring", 00:20:12.896 "config": [ 00:20:12.896 { 00:20:12.896 "method": "keyring_file_add_key", 00:20:12.896 "params": { 00:20:12.896 "name": "key0", 00:20:12.896 "path": "/tmp/tmp.A5f6G1FWFx" 00:20:12.896 } 00:20:12.896 } 00:20:12.896 ] 00:20:12.896 }, 00:20:12.896 { 00:20:12.896 "subsystem": "iobuf", 00:20:12.896 "config": [ 00:20:12.896 { 00:20:12.896 "method": "iobuf_set_options", 00:20:12.896 "params": { 00:20:12.896 "small_pool_count": 8192, 00:20:12.896 "large_pool_count": 1024, 00:20:12.896 "small_bufsize": 8192, 00:20:12.896 "large_bufsize": 135168, 00:20:12.896 "enable_numa": false 00:20:12.896 } 00:20:12.896 } 00:20:12.896 ] 00:20:12.896 }, 00:20:12.896 { 00:20:12.896 "subsystem": "sock", 00:20:12.896 "config": [ 00:20:12.896 { 00:20:12.896 "method": "sock_set_default_impl", 00:20:12.896 "params": { 00:20:12.896 "impl_name": "posix" 
00:20:12.896 } 00:20:12.896 }, 00:20:12.896 { 00:20:12.896 "method": "sock_impl_set_options", 00:20:12.896 "params": { 00:20:12.896 "impl_name": "ssl", 00:20:12.896 "recv_buf_size": 4096, 00:20:12.896 "send_buf_size": 4096, 00:20:12.896 "enable_recv_pipe": true, 00:20:12.896 "enable_quickack": false, 00:20:12.896 "enable_placement_id": 0, 00:20:12.896 "enable_zerocopy_send_server": true, 00:20:12.896 "enable_zerocopy_send_client": false, 00:20:12.896 "zerocopy_threshold": 0, 00:20:12.896 "tls_version": 0, 00:20:12.896 "enable_ktls": false 00:20:12.896 } 00:20:12.896 }, 00:20:12.896 { 00:20:12.896 "method": "sock_impl_set_options", 00:20:12.896 "params": { 00:20:12.896 "impl_name": "posix", 00:20:12.896 "recv_buf_size": 2097152, 00:20:12.896 "send_buf_size": 2097152, 00:20:12.896 "enable_recv_pipe": true, 00:20:12.896 "enable_quickack": false, 00:20:12.896 "enable_placement_id": 0, 00:20:12.896 "enable_zerocopy_send_server": true, 00:20:12.896 "enable_zerocopy_send_client": false, 00:20:12.896 "zerocopy_threshold": 0, 00:20:12.896 "tls_version": 0, 00:20:12.896 "enable_ktls": false 00:20:12.896 } 00:20:12.896 } 00:20:12.896 ] 00:20:12.896 }, 00:20:12.896 { 00:20:12.896 "subsystem": "vmd", 00:20:12.896 "config": [] 00:20:12.896 }, 00:20:12.896 { 00:20:12.896 "subsystem": "accel", 00:20:12.896 "config": [ 00:20:12.896 { 00:20:12.896 "method": "accel_set_options", 00:20:12.897 "params": { 00:20:12.897 "small_cache_size": 128, 00:20:12.897 "large_cache_size": 16, 00:20:12.897 "task_count": 2048, 00:20:12.897 "sequence_count": 2048, 00:20:12.897 "buf_count": 2048 00:20:12.897 } 00:20:12.897 } 00:20:12.897 ] 00:20:12.897 }, 00:20:12.897 { 00:20:12.897 "subsystem": "bdev", 00:20:12.897 "config": [ 00:20:12.897 { 00:20:12.897 "method": "bdev_set_options", 00:20:12.897 "params": { 00:20:12.897 "bdev_io_pool_size": 65535, 00:20:12.897 "bdev_io_cache_size": 256, 00:20:12.897 "bdev_auto_examine": true, 00:20:12.897 "iobuf_small_cache_size": 128, 00:20:12.897 "iobuf_large_cache_size": 16 00:20:12.897 } 00:20:12.897 }, 00:20:12.897 { 00:20:12.897 "method": "bdev_raid_set_options", 00:20:12.897 "params": { 00:20:12.897 "process_window_size_kb": 1024, 00:20:12.897 "process_max_bandwidth_mb_sec": 0 00:20:12.897 } 00:20:12.897 }, 00:20:12.897 { 00:20:12.897 "method": "bdev_iscsi_set_options", 00:20:12.897 "params": { 00:20:12.897 "timeout_sec": 30 00:20:12.897 } 00:20:12.897 }, 00:20:12.897 { 00:20:12.897 "method": "bdev_nvme_set_options", 00:20:12.897 "params": { 00:20:12.897 "action_on_timeout": "none", 00:20:12.897 "timeout_us": 0, 00:20:12.897 "timeout_admin_us": 0, 00:20:12.897 "keep_alive_timeout_ms": 10000, 00:20:12.897 "arbitration_burst": 0, 00:20:12.897 "low_priority_weight": 0, 00:20:12.897 "medium_priority_weight": 0, 00:20:12.897 "high_priority_weight": 0, 00:20:12.897 "nvme_adminq_poll_period_us": 10000, 00:20:12.897 "nvme_ioq_poll_period_us": 0, 00:20:12.897 "io_queue_requests": 0, 00:20:12.897 "delay_cmd_submit": true, 00:20:12.897 "transport_retry_count": 4, 00:20:12.897 "bdev_retry_count": 3, 00:20:12.897 "transport_ack_timeout": 0, 00:20:12.897 "ctrlr_loss_timeout_sec": 0, 00:20:12.897 "reconnect_delay_sec": 0, 00:20:12.897 "fast_io_fail_timeout_sec": 0, 00:20:12.897 "disable_auto_failback": false, 00:20:12.897 "generate_uuids": false, 00:20:12.897 "transport_tos": 0, 00:20:12.897 "nvme_error_stat": false, 00:20:12.897 "rdma_srq_size": 0, 00:20:12.897 "io_path_stat": false, 00:20:12.897 "allow_accel_sequence": false, 00:20:12.897 "rdma_max_cq_size": 0, 00:20:12.897 
"rdma_cm_event_timeout_ms": 0, 00:20:12.897 "dhchap_digests": [ 00:20:12.897 "sha256", 00:20:12.897 "sha384", 00:20:12.897 "sha512" 00:20:12.897 ], 00:20:12.897 "dhchap_dhgroups": [ 00:20:12.897 "null", 00:20:12.897 "ffdhe2048", 00:20:12.897 "ffdhe3072", 00:20:12.897 "ffdhe4096", 00:20:12.897 "ffdhe6144", 00:20:12.897 "ffdhe8192" 00:20:12.897 ] 00:20:12.897 } 00:20:12.897 }, 00:20:12.897 { 00:20:12.897 "method": "bdev_nvme_set_hotplug", 00:20:12.897 "params": { 00:20:12.897 "period_us": 100000, 00:20:12.897 "enable": false 00:20:12.897 } 00:20:12.897 }, 00:20:12.897 { 00:20:12.897 "method": "bdev_malloc_create", 00:20:12.897 "params": { 00:20:12.897 "name": "malloc0", 00:20:12.897 "num_blocks": 8192, 00:20:12.897 "block_size": 4096, 00:20:12.897 "physical_block_size": 4096, 00:20:12.897 "uuid": "bcff440c-5553-4686-af33-e6ec634eb1f8", 00:20:12.897 "optimal_io_boundary": 0, 00:20:12.897 "md_size": 0, 00:20:12.897 "dif_type": 0, 00:20:12.897 "dif_is_head_of_md": false, 00:20:12.897 "dif_pi_format": 0 00:20:12.897 } 00:20:12.897 }, 00:20:12.897 { 00:20:12.897 "method": "bdev_wait_for_examine" 00:20:12.897 } 00:20:12.897 ] 00:20:12.897 }, 00:20:12.897 { 00:20:12.897 "subsystem": "nbd", 00:20:12.897 "config": [] 00:20:12.897 }, 00:20:12.897 { 00:20:12.897 "subsystem": "scheduler", 00:20:12.897 "config": [ 00:20:12.897 { 00:20:12.897 "method": "framework_set_scheduler", 00:20:12.897 "params": { 00:20:12.897 "name": "static" 00:20:12.897 } 00:20:12.897 } 00:20:12.897 ] 00:20:12.897 }, 00:20:12.897 { 00:20:12.897 "subsystem": "nvmf", 00:20:12.897 "config": [ 00:20:12.897 { 00:20:12.897 "method": "nvmf_set_config", 00:20:12.897 "params": { 00:20:12.897 "discovery_filter": "match_any", 00:20:12.897 "admin_cmd_passthru": { 00:20:12.897 "identify_ctrlr": false 00:20:12.897 }, 00:20:12.897 "dhchap_digests": [ 00:20:12.897 "sha256", 00:20:12.897 "sha384", 00:20:12.897 "sha512" 00:20:12.897 ], 00:20:12.897 "dhchap_dhgroups": [ 00:20:12.897 "null", 00:20:12.897 "ffdhe2048", 00:20:12.897 "ffdhe3072", 00:20:12.897 "ffdhe4096", 00:20:12.897 "ffdhe6144", 00:20:12.897 "ffdhe8192" 00:20:12.897 ] 00:20:12.897 } 00:20:12.897 }, 00:20:12.897 { 00:20:12.897 "method": "nvmf_set_max_subsystems", 00:20:12.897 "params": { 00:20:12.897 "max_subsystems": 1024 00:20:12.897 } 00:20:12.897 }, 00:20:12.897 { 00:20:12.897 "method": "nvmf_set_crdt", 00:20:12.897 "params": { 00:20:12.897 "crdt1": 0, 00:20:12.897 "crdt2": 0, 00:20:12.897 "crdt3": 0 00:20:12.897 } 00:20:12.897 }, 00:20:12.897 { 00:20:12.897 "method": "nvmf_create_transport", 00:20:12.897 "params": { 00:20:12.897 "trtype": "TCP", 00:20:12.897 "max_queue_depth": 128, 00:20:12.897 "max_io_qpairs_per_ctrlr": 127, 00:20:12.897 "in_capsule_data_size": 4096, 00:20:12.897 "max_io_size": 131072, 00:20:12.897 "io_unit_size": 131072, 00:20:12.897 "max_aq_depth": 128, 00:20:12.897 "num_shared_buffers": 511, 00:20:12.897 "buf_cache_size": 4294967295, 00:20:12.897 "dif_insert_or_strip": false, 00:20:12.897 "zcopy": false, 00:20:12.897 "c2h_success": false, 00:20:12.897 "sock_priority": 0, 00:20:12.897 "abort_timeout_sec": 1, 00:20:12.897 "ack_timeout": 0, 00:20:12.897 "data_wr_pool_size": 0 00:20:12.897 } 00:20:12.897 }, 00:20:12.897 { 00:20:12.897 "method": "nvmf_create_subsystem", 00:20:12.897 "params": { 00:20:12.897 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.897 "allow_any_host": false, 00:20:12.897 "serial_number": "SPDK00000000000001", 00:20:12.897 "model_number": "SPDK bdev Controller", 00:20:12.897 "max_namespaces": 10, 00:20:12.897 "min_cntlid": 1, 00:20:12.897 
"max_cntlid": 65519, 00:20:12.897 "ana_reporting": false 00:20:12.897 } 00:20:12.897 }, 00:20:12.897 { 00:20:12.897 "method": "nvmf_subsystem_add_host", 00:20:12.897 "params": { 00:20:12.897 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.897 "host": "nqn.2016-06.io.spdk:host1", 00:20:12.897 "psk": "key0" 00:20:12.897 } 00:20:12.897 }, 00:20:12.897 { 00:20:12.897 "method": "nvmf_subsystem_add_ns", 00:20:12.897 "params": { 00:20:12.897 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.897 "namespace": { 00:20:12.897 "nsid": 1, 00:20:12.897 "bdev_name": "malloc0", 00:20:12.897 "nguid": "BCFF440C55534686AF33E6EC634EB1F8", 00:20:12.897 "uuid": "bcff440c-5553-4686-af33-e6ec634eb1f8", 00:20:12.897 "no_auto_visible": false 00:20:12.897 } 00:20:12.897 } 00:20:12.897 }, 00:20:12.897 { 00:20:12.897 "method": "nvmf_subsystem_add_listener", 00:20:12.897 "params": { 00:20:12.897 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.897 "listen_address": { 00:20:12.897 "trtype": "TCP", 00:20:12.897 "adrfam": "IPv4", 00:20:12.897 "traddr": "10.0.0.2", 00:20:12.897 "trsvcid": "4420" 00:20:12.897 }, 00:20:12.897 "secure_channel": true 00:20:12.897 } 00:20:12.897 } 00:20:12.897 ] 00:20:12.897 } 00:20:12.897 ] 00:20:12.897 }' 00:20:12.897 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:13.159 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:20:13.159 "subsystems": [ 00:20:13.159 { 00:20:13.159 "subsystem": "keyring", 00:20:13.159 "config": [ 00:20:13.159 { 00:20:13.159 "method": "keyring_file_add_key", 00:20:13.159 "params": { 00:20:13.159 "name": "key0", 00:20:13.159 "path": "/tmp/tmp.A5f6G1FWFx" 00:20:13.159 } 00:20:13.159 } 00:20:13.159 ] 00:20:13.159 }, 00:20:13.159 { 00:20:13.159 "subsystem": "iobuf", 00:20:13.159 "config": [ 00:20:13.159 { 00:20:13.159 "method": "iobuf_set_options", 00:20:13.159 "params": { 00:20:13.159 "small_pool_count": 8192, 00:20:13.159 "large_pool_count": 1024, 00:20:13.159 "small_bufsize": 8192, 00:20:13.159 "large_bufsize": 135168, 00:20:13.159 "enable_numa": false 00:20:13.159 } 00:20:13.159 } 00:20:13.159 ] 00:20:13.159 }, 00:20:13.159 { 00:20:13.159 "subsystem": "sock", 00:20:13.159 "config": [ 00:20:13.159 { 00:20:13.159 "method": "sock_set_default_impl", 00:20:13.159 "params": { 00:20:13.159 "impl_name": "posix" 00:20:13.159 } 00:20:13.159 }, 00:20:13.159 { 00:20:13.159 "method": "sock_impl_set_options", 00:20:13.159 "params": { 00:20:13.159 "impl_name": "ssl", 00:20:13.159 "recv_buf_size": 4096, 00:20:13.159 "send_buf_size": 4096, 00:20:13.159 "enable_recv_pipe": true, 00:20:13.159 "enable_quickack": false, 00:20:13.159 "enable_placement_id": 0, 00:20:13.159 "enable_zerocopy_send_server": true, 00:20:13.159 "enable_zerocopy_send_client": false, 00:20:13.159 "zerocopy_threshold": 0, 00:20:13.159 "tls_version": 0, 00:20:13.159 "enable_ktls": false 00:20:13.159 } 00:20:13.159 }, 00:20:13.159 { 00:20:13.159 "method": "sock_impl_set_options", 00:20:13.159 "params": { 00:20:13.159 "impl_name": "posix", 00:20:13.159 "recv_buf_size": 2097152, 00:20:13.159 "send_buf_size": 2097152, 00:20:13.159 "enable_recv_pipe": true, 00:20:13.159 "enable_quickack": false, 00:20:13.159 "enable_placement_id": 0, 00:20:13.159 "enable_zerocopy_send_server": true, 00:20:13.159 "enable_zerocopy_send_client": false, 00:20:13.159 "zerocopy_threshold": 0, 00:20:13.159 "tls_version": 0, 00:20:13.159 "enable_ktls": false 00:20:13.159 } 00:20:13.159 
} 00:20:13.159 ] 00:20:13.159 }, 00:20:13.159 { 00:20:13.159 "subsystem": "vmd", 00:20:13.159 "config": [] 00:20:13.159 }, 00:20:13.159 { 00:20:13.159 "subsystem": "accel", 00:20:13.159 "config": [ 00:20:13.159 { 00:20:13.159 "method": "accel_set_options", 00:20:13.159 "params": { 00:20:13.159 "small_cache_size": 128, 00:20:13.159 "large_cache_size": 16, 00:20:13.159 "task_count": 2048, 00:20:13.159 "sequence_count": 2048, 00:20:13.159 "buf_count": 2048 00:20:13.159 } 00:20:13.159 } 00:20:13.159 ] 00:20:13.159 }, 00:20:13.159 { 00:20:13.159 "subsystem": "bdev", 00:20:13.159 "config": [ 00:20:13.159 { 00:20:13.159 "method": "bdev_set_options", 00:20:13.159 "params": { 00:20:13.159 "bdev_io_pool_size": 65535, 00:20:13.159 "bdev_io_cache_size": 256, 00:20:13.159 "bdev_auto_examine": true, 00:20:13.159 "iobuf_small_cache_size": 128, 00:20:13.159 "iobuf_large_cache_size": 16 00:20:13.159 } 00:20:13.159 }, 00:20:13.159 { 00:20:13.159 "method": "bdev_raid_set_options", 00:20:13.159 "params": { 00:20:13.159 "process_window_size_kb": 1024, 00:20:13.159 "process_max_bandwidth_mb_sec": 0 00:20:13.159 } 00:20:13.159 }, 00:20:13.159 { 00:20:13.159 "method": "bdev_iscsi_set_options", 00:20:13.159 "params": { 00:20:13.159 "timeout_sec": 30 00:20:13.159 } 00:20:13.159 }, 00:20:13.159 { 00:20:13.159 "method": "bdev_nvme_set_options", 00:20:13.159 "params": { 00:20:13.159 "action_on_timeout": "none", 00:20:13.159 "timeout_us": 0, 00:20:13.159 "timeout_admin_us": 0, 00:20:13.159 "keep_alive_timeout_ms": 10000, 00:20:13.159 "arbitration_burst": 0, 00:20:13.159 "low_priority_weight": 0, 00:20:13.159 "medium_priority_weight": 0, 00:20:13.159 "high_priority_weight": 0, 00:20:13.159 "nvme_adminq_poll_period_us": 10000, 00:20:13.159 "nvme_ioq_poll_period_us": 0, 00:20:13.159 "io_queue_requests": 512, 00:20:13.159 "delay_cmd_submit": true, 00:20:13.159 "transport_retry_count": 4, 00:20:13.159 "bdev_retry_count": 3, 00:20:13.159 "transport_ack_timeout": 0, 00:20:13.159 "ctrlr_loss_timeout_sec": 0, 00:20:13.159 "reconnect_delay_sec": 0, 00:20:13.159 "fast_io_fail_timeout_sec": 0, 00:20:13.159 "disable_auto_failback": false, 00:20:13.159 "generate_uuids": false, 00:20:13.159 "transport_tos": 0, 00:20:13.159 "nvme_error_stat": false, 00:20:13.159 "rdma_srq_size": 0, 00:20:13.159 "io_path_stat": false, 00:20:13.159 "allow_accel_sequence": false, 00:20:13.159 "rdma_max_cq_size": 0, 00:20:13.159 "rdma_cm_event_timeout_ms": 0, 00:20:13.159 "dhchap_digests": [ 00:20:13.159 "sha256", 00:20:13.159 "sha384", 00:20:13.159 "sha512" 00:20:13.159 ], 00:20:13.159 "dhchap_dhgroups": [ 00:20:13.159 "null", 00:20:13.159 "ffdhe2048", 00:20:13.159 "ffdhe3072", 00:20:13.159 "ffdhe4096", 00:20:13.159 "ffdhe6144", 00:20:13.159 "ffdhe8192" 00:20:13.159 ] 00:20:13.159 } 00:20:13.159 }, 00:20:13.159 { 00:20:13.159 "method": "bdev_nvme_attach_controller", 00:20:13.159 "params": { 00:20:13.159 "name": "TLSTEST", 00:20:13.159 "trtype": "TCP", 00:20:13.159 "adrfam": "IPv4", 00:20:13.159 "traddr": "10.0.0.2", 00:20:13.159 "trsvcid": "4420", 00:20:13.159 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.159 "prchk_reftag": false, 00:20:13.159 "prchk_guard": false, 00:20:13.159 "ctrlr_loss_timeout_sec": 0, 00:20:13.159 "reconnect_delay_sec": 0, 00:20:13.159 "fast_io_fail_timeout_sec": 0, 00:20:13.159 "psk": "key0", 00:20:13.159 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:13.159 "hdgst": false, 00:20:13.159 "ddgst": false, 00:20:13.159 "multipath": "multipath" 00:20:13.159 } 00:20:13.159 }, 00:20:13.159 { 00:20:13.159 "method": 
"bdev_nvme_set_hotplug", 00:20:13.159 "params": { 00:20:13.159 "period_us": 100000, 00:20:13.159 "enable": false 00:20:13.159 } 00:20:13.159 }, 00:20:13.159 { 00:20:13.159 "method": "bdev_wait_for_examine" 00:20:13.159 } 00:20:13.159 ] 00:20:13.159 }, 00:20:13.159 { 00:20:13.159 "subsystem": "nbd", 00:20:13.159 "config": [] 00:20:13.159 } 00:20:13.159 ] 00:20:13.159 }' 00:20:13.159 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2961362 00:20:13.159 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2961362 ']' 00:20:13.159 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2961362 00:20:13.159 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:13.159 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:13.159 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2961362 00:20:13.159 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:13.159 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:13.159 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2961362' 00:20:13.159 killing process with pid 2961362 00:20:13.159 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2961362 00:20:13.160 Received shutdown signal, test time was about 10.000000 seconds 00:20:13.160 00:20:13.160 Latency(us) 00:20:13.160 [2024-11-26T18:10:30.373Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.160 [2024-11-26T18:10:30.373Z] =================================================================================================================== 00:20:13.160 [2024-11-26T18:10:30.373Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:13.160 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2961362 00:20:13.420 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2960996 00:20:13.420 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2960996 ']' 00:20:13.420 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2960996 00:20:13.420 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:13.420 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:13.420 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2960996 00:20:13.420 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:13.420 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:13.420 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2960996' 00:20:13.420 killing process with pid 2960996 00:20:13.420 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2960996 00:20:13.420 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2960996 00:20:13.420 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:13.420 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:13.420 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:13.420 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.420 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:13.420 "subsystems": [ 00:20:13.420 { 00:20:13.420 "subsystem": "keyring", 00:20:13.420 "config": [ 00:20:13.420 { 00:20:13.420 "method": "keyring_file_add_key", 00:20:13.420 "params": { 00:20:13.420 "name": "key0", 00:20:13.420 "path": "/tmp/tmp.A5f6G1FWFx" 00:20:13.420 } 00:20:13.420 } 00:20:13.420 ] 00:20:13.420 }, 00:20:13.420 { 00:20:13.420 "subsystem": "iobuf", 00:20:13.420 "config": [ 00:20:13.420 { 00:20:13.420 "method": "iobuf_set_options", 00:20:13.420 "params": { 00:20:13.420 "small_pool_count": 8192, 00:20:13.420 "large_pool_count": 1024, 00:20:13.420 "small_bufsize": 8192, 00:20:13.420 "large_bufsize": 135168, 00:20:13.420 "enable_numa": false 00:20:13.420 } 00:20:13.420 } 00:20:13.420 ] 00:20:13.420 }, 00:20:13.421 { 00:20:13.421 "subsystem": "sock", 00:20:13.421 "config": [ 00:20:13.421 { 00:20:13.421 "method": "sock_set_default_impl", 00:20:13.421 "params": { 00:20:13.421 "impl_name": "posix" 00:20:13.421 } 00:20:13.421 }, 00:20:13.421 { 00:20:13.421 "method": "sock_impl_set_options", 00:20:13.421 "params": { 00:20:13.421 "impl_name": "ssl", 00:20:13.421 "recv_buf_size": 4096, 00:20:13.421 "send_buf_size": 4096, 00:20:13.421 "enable_recv_pipe": true, 00:20:13.421 "enable_quickack": false, 00:20:13.421 "enable_placement_id": 0, 00:20:13.421 "enable_zerocopy_send_server": true, 00:20:13.421 "enable_zerocopy_send_client": false, 00:20:13.421 "zerocopy_threshold": 0, 00:20:13.421 "tls_version": 0, 00:20:13.421 "enable_ktls": false 00:20:13.421 } 00:20:13.421 }, 00:20:13.421 { 00:20:13.421 "method": "sock_impl_set_options", 00:20:13.421 "params": { 00:20:13.421 "impl_name": "posix", 00:20:13.421 "recv_buf_size": 2097152, 00:20:13.421 "send_buf_size": 2097152, 00:20:13.421 "enable_recv_pipe": true, 00:20:13.421 "enable_quickack": false, 00:20:13.421 "enable_placement_id": 0, 00:20:13.421 "enable_zerocopy_send_server": true, 00:20:13.421 "enable_zerocopy_send_client": false, 00:20:13.421 "zerocopy_threshold": 0, 00:20:13.421 "tls_version": 0, 00:20:13.421 "enable_ktls": false 00:20:13.421 } 00:20:13.421 } 00:20:13.421 ] 00:20:13.421 }, 00:20:13.421 { 00:20:13.421 "subsystem": "vmd", 00:20:13.421 "config": [] 00:20:13.421 }, 00:20:13.421 { 00:20:13.421 "subsystem": "accel", 00:20:13.421 "config": [ 00:20:13.421 { 00:20:13.421 "method": "accel_set_options", 00:20:13.421 "params": { 00:20:13.421 "small_cache_size": 128, 00:20:13.421 "large_cache_size": 16, 00:20:13.421 "task_count": 2048, 00:20:13.421 "sequence_count": 2048, 00:20:13.421 "buf_count": 2048 00:20:13.421 } 00:20:13.421 } 00:20:13.421 ] 00:20:13.421 }, 00:20:13.421 { 00:20:13.421 "subsystem": "bdev", 00:20:13.421 "config": [ 00:20:13.421 { 00:20:13.421 "method": "bdev_set_options", 00:20:13.421 "params": { 00:20:13.421 "bdev_io_pool_size": 65535, 00:20:13.421 "bdev_io_cache_size": 256, 00:20:13.421 "bdev_auto_examine": true, 00:20:13.421 "iobuf_small_cache_size": 128, 00:20:13.421 "iobuf_large_cache_size": 16 00:20:13.421 } 00:20:13.421 }, 00:20:13.421 { 00:20:13.421 "method": "bdev_raid_set_options", 00:20:13.421 "params": { 00:20:13.421 
"process_window_size_kb": 1024, 00:20:13.421 "process_max_bandwidth_mb_sec": 0 00:20:13.421 } 00:20:13.421 }, 00:20:13.421 { 00:20:13.421 "method": "bdev_iscsi_set_options", 00:20:13.421 "params": { 00:20:13.421 "timeout_sec": 30 00:20:13.421 } 00:20:13.421 }, 00:20:13.421 { 00:20:13.421 "method": "bdev_nvme_set_options", 00:20:13.421 "params": { 00:20:13.421 "action_on_timeout": "none", 00:20:13.421 "timeout_us": 0, 00:20:13.421 "timeout_admin_us": 0, 00:20:13.421 "keep_alive_timeout_ms": 10000, 00:20:13.421 "arbitration_burst": 0, 00:20:13.421 "low_priority_weight": 0, 00:20:13.421 "medium_priority_weight": 0, 00:20:13.421 "high_priority_weight": 0, 00:20:13.421 "nvme_adminq_poll_period_us": 10000, 00:20:13.421 "nvme_ioq_poll_period_us": 0, 00:20:13.421 "io_queue_requests": 0, 00:20:13.421 "delay_cmd_submit": true, 00:20:13.421 "transport_retry_count": 4, 00:20:13.421 "bdev_retry_count": 3, 00:20:13.421 "transport_ack_timeout": 0, 00:20:13.421 "ctrlr_loss_timeout_sec": 0, 00:20:13.421 "reconnect_delay_sec": 0, 00:20:13.421 "fast_io_fail_timeout_sec": 0, 00:20:13.421 "disable_auto_failback": false, 00:20:13.421 "generate_uuids": false, 00:20:13.421 "transport_tos": 0, 00:20:13.421 "nvme_error_stat": false, 00:20:13.421 "rdma_srq_size": 0, 00:20:13.421 "io_path_stat": false, 00:20:13.421 "allow_accel_sequence": false, 00:20:13.421 "rdma_max_cq_size": 0, 00:20:13.421 "rdma_cm_event_timeout_ms": 0, 00:20:13.421 "dhchap_digests": [ 00:20:13.421 "sha256", 00:20:13.421 "sha384", 00:20:13.421 "sha512" 00:20:13.421 ], 00:20:13.421 "dhchap_dhgroups": [ 00:20:13.421 "null", 00:20:13.421 "ffdhe2048", 00:20:13.421 "ffdhe3072", 00:20:13.421 "ffdhe4096", 00:20:13.421 "ffdhe6144", 00:20:13.421 "ffdhe8192" 00:20:13.421 ] 00:20:13.421 } 00:20:13.421 }, 00:20:13.421 { 00:20:13.421 "method": "bdev_nvme_set_hotplug", 00:20:13.421 "params": { 00:20:13.421 "period_us": 100000, 00:20:13.421 "enable": false 00:20:13.421 } 00:20:13.421 }, 00:20:13.421 { 00:20:13.421 "method": "bdev_malloc_create", 00:20:13.421 "params": { 00:20:13.421 "name": "malloc0", 00:20:13.421 "num_blocks": 8192, 00:20:13.421 "block_size": 4096, 00:20:13.421 "physical_block_size": 4096, 00:20:13.421 "uuid": "bcff440c-5553-4686-af33-e6ec634eb1f8", 00:20:13.421 "optimal_io_boundary": 0, 00:20:13.421 "md_size": 0, 00:20:13.421 "dif_type": 0, 00:20:13.421 "dif_is_head_of_md": false, 00:20:13.421 "dif_pi_format": 0 00:20:13.421 } 00:20:13.421 }, 00:20:13.421 { 00:20:13.421 "method": "bdev_wait_for_examine" 00:20:13.421 } 00:20:13.421 ] 00:20:13.421 }, 00:20:13.421 { 00:20:13.421 "subsystem": "nbd", 00:20:13.421 "config": [] 00:20:13.421 }, 00:20:13.421 { 00:20:13.421 "subsystem": "scheduler", 00:20:13.421 "config": [ 00:20:13.421 { 00:20:13.421 "method": "framework_set_scheduler", 00:20:13.421 "params": { 00:20:13.421 "name": "static" 00:20:13.421 } 00:20:13.421 } 00:20:13.422 ] 00:20:13.422 }, 00:20:13.422 { 00:20:13.422 "subsystem": "nvmf", 00:20:13.422 "config": [ 00:20:13.422 { 00:20:13.422 "method": "nvmf_set_config", 00:20:13.422 "params": { 00:20:13.422 "discovery_filter": "match_any", 00:20:13.422 "admin_cmd_passthru": { 00:20:13.422 "identify_ctrlr": false 00:20:13.422 }, 00:20:13.422 "dhchap_digests": [ 00:20:13.422 "sha256", 00:20:13.422 "sha384", 00:20:13.422 "sha512" 00:20:13.422 ], 00:20:13.422 "dhchap_dhgroups": [ 00:20:13.422 "null", 00:20:13.422 "ffdhe2048", 00:20:13.422 "ffdhe3072", 00:20:13.422 "ffdhe4096", 00:20:13.422 "ffdhe6144", 00:20:13.422 "ffdhe8192" 00:20:13.422 ] 00:20:13.422 } 00:20:13.422 }, 00:20:13.422 { 
00:20:13.422 "method": "nvmf_set_max_subsystems", 00:20:13.422 "params": { 00:20:13.422 "max_subsystems": 1024 00:20:13.422 } 00:20:13.422 }, 00:20:13.422 { 00:20:13.422 "method": "nvmf_set_crdt", 00:20:13.422 "params": { 00:20:13.422 "crdt1": 0, 00:20:13.422 "crdt2": 0, 00:20:13.422 "crdt3": 0 00:20:13.422 } 00:20:13.422 }, 00:20:13.422 { 00:20:13.422 "method": "nvmf_create_transport", 00:20:13.422 "params": { 00:20:13.422 "trtype": "TCP", 00:20:13.422 "max_queue_depth": 128, 00:20:13.422 "max_io_qpairs_per_ctrlr": 127, 00:20:13.422 "in_capsule_data_size": 4096, 00:20:13.422 "max_io_size": 131072, 00:20:13.422 "io_unit_size": 131072, 00:20:13.422 "max_aq_depth": 128, 00:20:13.422 "num_shared_buffers": 511, 00:20:13.422 "buf_cache_size": 4294967295, 00:20:13.422 "dif_insert_or_strip": false, 00:20:13.422 "zcopy": false, 00:20:13.422 "c2h_success": false, 00:20:13.422 "sock_priority": 0, 00:20:13.422 "abort_timeout_sec": 1, 00:20:13.422 "ack_timeout": 0, 00:20:13.422 "data_wr_pool_size": 0 00:20:13.422 } 00:20:13.422 }, 00:20:13.422 { 00:20:13.422 "method": "nvmf_create_subsystem", 00:20:13.422 "params": { 00:20:13.422 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.422 "allow_any_host": false, 00:20:13.422 "serial_number": "SPDK00000000000001", 00:20:13.422 "model_number": "SPDK bdev Controller", 00:20:13.422 "max_namespaces": 10, 00:20:13.422 "min_cntlid": 1, 00:20:13.422 "max_cntlid": 65519, 00:20:13.422 "ana_reporting": false 00:20:13.422 } 00:20:13.422 }, 00:20:13.422 { 00:20:13.422 "method": "nvmf_subsystem_add_host", 00:20:13.422 "params": { 00:20:13.422 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.422 "host": "nqn.2016-06.io.spdk:host1", 00:20:13.422 "psk": "key0" 00:20:13.422 } 00:20:13.422 }, 00:20:13.422 { 00:20:13.422 "method": "nvmf_subsystem_add_ns", 00:20:13.422 "params": { 00:20:13.422 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.422 "namespace": { 00:20:13.422 "nsid": 1, 00:20:13.422 "bdev_name": "malloc0", 00:20:13.422 "nguid": "BCFF440C55534686AF33E6EC634EB1F8", 00:20:13.422 "uuid": "bcff440c-5553-4686-af33-e6ec634eb1f8", 00:20:13.422 "no_auto_visible": false 00:20:13.422 } 00:20:13.422 } 00:20:13.422 }, 00:20:13.422 { 00:20:13.422 "method": "nvmf_subsystem_add_listener", 00:20:13.422 "params": { 00:20:13.422 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.422 "listen_address": { 00:20:13.422 "trtype": "TCP", 00:20:13.422 "adrfam": "IPv4", 00:20:13.422 "traddr": "10.0.0.2", 00:20:13.422 "trsvcid": "4420" 00:20:13.422 }, 00:20:13.422 "secure_channel": true 00:20:13.422 } 00:20:13.422 } 00:20:13.422 ] 00:20:13.422 } 00:20:13.422 ] 00:20:13.422 }' 00:20:13.683 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2961878 00:20:13.683 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2961878 00:20:13.683 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:13.683 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2961878 ']' 00:20:13.683 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.683 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:13.683 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:20:13.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.683 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:13.683 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.683 [2024-11-26 19:10:30.679098] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:20:13.683 [2024-11-26 19:10:30.679156] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.683 [2024-11-26 19:10:30.768645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.683 [2024-11-26 19:10:30.797542] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.683 [2024-11-26 19:10:30.797571] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:13.683 [2024-11-26 19:10:30.797577] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:13.683 [2024-11-26 19:10:30.797581] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:13.683 [2024-11-26 19:10:30.797585] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:13.683 [2024-11-26 19:10:30.798050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.943 [2024-11-26 19:10:30.991864] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:13.943 [2024-11-26 19:10:31.023889] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:13.943 [2024-11-26 19:10:31.024090] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.521 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:14.521 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:14.521 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:14.521 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:14.521 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.521 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.521 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2962070 00:20:14.521 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2962070 /var/tmp/bdevperf.sock 00:20:14.521 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2962070 ']' 00:20:14.521 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:14.521 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:14.521 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:14.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:20:14.521 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63
00:20:14.521 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:14.521 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:14.521 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{
00:20:14.521 "subsystems": [
00:20:14.521 {
00:20:14.521 "subsystem": "keyring",
00:20:14.521 "config": [
00:20:14.521 {
00:20:14.521 "method": "keyring_file_add_key",
00:20:14.521 "params": {
00:20:14.521 "name": "key0",
00:20:14.521 "path": "/tmp/tmp.A5f6G1FWFx"
00:20:14.521 }
00:20:14.521 }
00:20:14.521 ]
00:20:14.521 },
00:20:14.521 {
00:20:14.521 "subsystem": "iobuf",
00:20:14.521 "config": [
00:20:14.521 {
00:20:14.521 "method": "iobuf_set_options",
00:20:14.521 "params": {
00:20:14.521 "small_pool_count": 8192,
00:20:14.521 "large_pool_count": 1024,
00:20:14.521 "small_bufsize": 8192,
00:20:14.521 "large_bufsize": 135168,
00:20:14.521 "enable_numa": false
00:20:14.521 }
00:20:14.521 }
00:20:14.521 ]
00:20:14.521 },
00:20:14.521 {
00:20:14.521 "subsystem": "sock",
00:20:14.521 "config": [
00:20:14.521 {
00:20:14.521 "method": "sock_set_default_impl",
00:20:14.521 "params": {
00:20:14.521 "impl_name": "posix"
00:20:14.521 }
00:20:14.521 },
00:20:14.521 {
00:20:14.521 "method": "sock_impl_set_options",
00:20:14.521 "params": {
00:20:14.521 "impl_name": "ssl",
00:20:14.521 "recv_buf_size": 4096,
00:20:14.521 "send_buf_size": 4096,
00:20:14.521 "enable_recv_pipe": true,
00:20:14.521 "enable_quickack": false,
00:20:14.521 "enable_placement_id": 0,
00:20:14.521 "enable_zerocopy_send_server": true,
00:20:14.521 "enable_zerocopy_send_client": false,
00:20:14.521 "zerocopy_threshold": 0,
00:20:14.521 "tls_version": 0,
00:20:14.521 "enable_ktls": false
00:20:14.521 }
00:20:14.521 },
00:20:14.521 {
00:20:14.521 "method": "sock_impl_set_options",
00:20:14.521 "params": {
00:20:14.521 "impl_name": "posix",
00:20:14.521 "recv_buf_size": 2097152,
00:20:14.521 "send_buf_size": 2097152,
00:20:14.521 "enable_recv_pipe": true,
00:20:14.521 "enable_quickack": false,
00:20:14.521 "enable_placement_id": 0,
00:20:14.521 "enable_zerocopy_send_server": true,
00:20:14.521 "enable_zerocopy_send_client": false,
00:20:14.521 "zerocopy_threshold": 0,
00:20:14.521 "tls_version": 0,
00:20:14.521 "enable_ktls": false
00:20:14.521 }
00:20:14.521 }
00:20:14.521 ]
00:20:14.521 },
00:20:14.521 {
00:20:14.521 "subsystem": "vmd",
00:20:14.521 "config": []
00:20:14.521 },
00:20:14.521 {
00:20:14.521 "subsystem": "accel",
00:20:14.521 "config": [
00:20:14.521 {
00:20:14.521 "method": "accel_set_options",
00:20:14.521 "params": {
00:20:14.521 "small_cache_size": 128,
00:20:14.521 "large_cache_size": 16,
00:20:14.521 "task_count": 2048,
00:20:14.521 "sequence_count": 2048,
00:20:14.521 "buf_count": 2048
00:20:14.521 }
00:20:14.521 }
00:20:14.521 ]
00:20:14.521 },
00:20:14.521 {
00:20:14.521 "subsystem": "bdev",
00:20:14.521 "config": [
00:20:14.521 {
00:20:14.521 "method": "bdev_set_options",
00:20:14.521 "params": {
00:20:14.521 "bdev_io_pool_size": 65535,
00:20:14.521 "bdev_io_cache_size": 256,
00:20:14.521 "bdev_auto_examine": true,
00:20:14.521 "iobuf_small_cache_size": 128,
00:20:14.521 "iobuf_large_cache_size": 16 00:20:14.521 } 00:20:14.521 }, 00:20:14.521 { 00:20:14.521 "method": "bdev_raid_set_options", 00:20:14.521 "params": { 00:20:14.521 "process_window_size_kb": 1024, 00:20:14.521 "process_max_bandwidth_mb_sec": 0 00:20:14.521 } 00:20:14.521 }, 00:20:14.521 { 00:20:14.521 "method": "bdev_iscsi_set_options", 00:20:14.521 "params": { 00:20:14.521 "timeout_sec": 30 00:20:14.521 } 00:20:14.521 }, 00:20:14.521 { 00:20:14.521 "method": "bdev_nvme_set_options", 00:20:14.521 "params": { 00:20:14.521 "action_on_timeout": "none", 00:20:14.521 "timeout_us": 0, 00:20:14.521 "timeout_admin_us": 0, 00:20:14.521 "keep_alive_timeout_ms": 10000, 00:20:14.521 "arbitration_burst": 0, 00:20:14.521 "low_priority_weight": 0, 00:20:14.521 "medium_priority_weight": 0, 00:20:14.521 "high_priority_weight": 0, 00:20:14.521 "nvme_adminq_poll_period_us": 10000, 00:20:14.521 "nvme_ioq_poll_period_us": 0, 00:20:14.521 "io_queue_requests": 512, 00:20:14.521 "delay_cmd_submit": true, 00:20:14.521 "transport_retry_count": 4, 00:20:14.521 "bdev_retry_count": 3, 00:20:14.521 "transport_ack_timeout": 0, 00:20:14.521 "ctrlr_loss_timeout_sec": 0, 00:20:14.521 "reconnect_delay_sec": 0, 00:20:14.521 "fast_io_fail_timeout_sec": 0, 00:20:14.521 "disable_auto_failback": false, 00:20:14.521 "generate_uuids": false, 00:20:14.521 "transport_tos": 0, 00:20:14.521 "nvme_error_stat": false, 00:20:14.521 "rdma_srq_size": 0, 00:20:14.521 "io_path_stat": false, 00:20:14.521 "allow_accel_sequence": false, 00:20:14.521 "rdma_max_cq_size": 0, 00:20:14.521 "rdma_cm_event_timeout_ms": 0, 00:20:14.521 "dhchap_digests": [ 00:20:14.521 "sha256", 00:20:14.521 "sha384", 00:20:14.521 "sha512" 00:20:14.521 ], 00:20:14.521 "dhchap_dhgroups": [ 00:20:14.521 "null", 00:20:14.521 "ffdhe2048", 00:20:14.521 "ffdhe3072", 00:20:14.521 "ffdhe4096", 00:20:14.521 "ffdhe6144", 00:20:14.521 "ffdhe8192" 00:20:14.521 ] 00:20:14.521 } 00:20:14.521 }, 00:20:14.521 { 00:20:14.521 "method": "bdev_nvme_attach_controller", 00:20:14.521 "params": { 00:20:14.521 "name": "TLSTEST", 00:20:14.521 "trtype": "TCP", 00:20:14.521 "adrfam": "IPv4", 00:20:14.521 "traddr": "10.0.0.2", 00:20:14.521 "trsvcid": "4420", 00:20:14.521 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.521 "prchk_reftag": false, 00:20:14.521 "prchk_guard": false, 00:20:14.521 "ctrlr_loss_timeout_sec": 0, 00:20:14.521 "reconnect_delay_sec": 0, 00:20:14.521 "fast_io_fail_timeout_sec": 0, 00:20:14.521 "psk": "key0", 00:20:14.522 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:14.522 "hdgst": false, 00:20:14.522 "ddgst": false, 00:20:14.522 "multipath": "multipath" 00:20:14.522 } 00:20:14.522 }, 00:20:14.522 { 00:20:14.522 "method": "bdev_nvme_set_hotplug", 00:20:14.522 "params": { 00:20:14.522 "period_us": 100000, 00:20:14.522 "enable": false 00:20:14.522 } 00:20:14.522 }, 00:20:14.522 { 00:20:14.522 "method": "bdev_wait_for_examine" 00:20:14.522 } 00:20:14.522 ] 00:20:14.522 }, 00:20:14.522 { 00:20:14.522 "subsystem": "nbd", 00:20:14.522 "config": [] 00:20:14.522 } 00:20:14.522 ] 00:20:14.522 }' 00:20:14.522 [2024-11-26 19:10:31.559952] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
00:20:14.522 [2024-11-26 19:10:31.560004] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2962070 ]
00:20:14.522 [2024-11-26 19:10:31.649566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:14.522 [2024-11-26 19:10:31.684774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:20:14.782 [2024-11-26 19:10:31.825560] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:20:15.352 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:15.352 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:20:15.352 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:20:15.352 Running I/O for 10 seconds...
00:20:17.232 5294.00 IOPS, 20.68 MiB/s
[2024-11-26T18:10:35.828Z] 5284.50 IOPS, 20.64 MiB/s
[2024-11-26T18:10:36.769Z] 5656.33 IOPS, 22.10 MiB/s
[2024-11-26T18:10:37.709Z] 5870.00 IOPS, 22.93 MiB/s
[2024-11-26T18:10:38.650Z] 5909.20 IOPS, 23.08 MiB/s
[2024-11-26T18:10:39.592Z] 5825.00 IOPS, 22.75 MiB/s
[2024-11-26T18:10:40.579Z] 5810.86 IOPS, 22.70 MiB/s
[2024-11-26T18:10:41.518Z] 5842.38 IOPS, 22.82 MiB/s
[2024-11-26T18:10:42.456Z] 5778.89 IOPS, 22.57 MiB/s
[2024-11-26T18:10:42.456Z] 5756.20 IOPS, 22.49 MiB/s
00:20:25.243 Latency(us)
[2024-11-26T18:10:42.456Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:25.243 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:25.243 Verification LBA range: start 0x0 length 0x2000
00:20:25.243 TLSTESTn1 : 10.01 5762.22 22.51 0.00 0.00 22180.27 4642.13 23920.64
00:20:25.243 [2024-11-26T18:10:42.456Z] ===================================================================================================================
00:20:25.243 [2024-11-26T18:10:42.456Z] Total : 5762.22 22.51 0.00 0.00 22180.27 4642.13 23920.64
00:20:25.504 {
00:20:25.504 "results": [
00:20:25.504 {
00:20:25.504 "job": "TLSTESTn1",
00:20:25.504 "core_mask": "0x4",
00:20:25.504 "workload": "verify",
00:20:25.504 "status": "finished",
00:20:25.504 "verify_range": {
00:20:25.504 "start": 0,
00:20:25.504 "length": 8192
00:20:25.504 },
00:20:25.504 "queue_depth": 128,
00:20:25.504 "io_size": 4096,
00:20:25.504 "runtime": 10.011585,
00:20:25.504 "iops": 5762.224462959662,
00:20:25.504 "mibps": 22.50868930843618,
00:20:25.504 "io_failed": 0,
00:20:25.504 "io_timeout": 0,
00:20:25.504 "avg_latency_us": 22180.273226900565,
00:20:25.504 "min_latency_us": 4642.133333333333,
00:20:25.504 "max_latency_us": 23920.64
00:20:25.504 }
00:20:25.505 ],
00:20:25.505 "core_count": 1
00:20:25.505 }
00:20:25.505 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:20:25.505 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2962070
00:20:25.505 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2962070 ']'
00:20:25.505 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2962070
00:20:25.505 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:20:25.505 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:25.505 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2962070
00:20:25.505 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:20:25.505 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:20:25.505 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2962070'
killing process with pid 2962070
19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2962070
00:20:25.505 Received shutdown signal, test time was about 10.000000 seconds
00:20:25.505
00:20:25.505 Latency(us)
[2024-11-26T18:10:42.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-26T18:10:42.718Z] ===================================================================================================================
[2024-11-26T18:10:42.718Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:25.505 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2962070
00:20:25.505 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2961878
00:20:25.505 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2961878 ']'
00:20:25.505 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2961878
00:20:25.505 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:20:25.505 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:25.505 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2961878
00:20:25.505 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:20:25.505 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:20:25.505 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2961878'
killing process with pid 2961878
19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2961878
00:20:25.505 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2961878
00:20:25.767 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart
00:20:25.767 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:20:25.767 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:25.767 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:25.767 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2964262
00:20:25.767 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:20:25.767 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2964262
00:20:25.767 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2964262 ']'
00:20:25.767 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:25.767 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:25.767 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:25.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:25.767 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:25.767 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:25.767 [2024-11-26 19:10:42.882372] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization...
00:20:25.767 [2024-11-26 19:10:42.882430] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:26.029 [2024-11-26 19:10:42.976816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:26.029 [2024-11-26 19:10:43.026464] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:26.029 [2024-11-26 19:10:43.026516] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:26.029 [2024-11-26 19:10:43.026525] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:26.029 [2024-11-26 19:10:43.026532] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:26.029 [2024-11-26 19:10:43.026538] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:26.029 [2024-11-26 19:10:43.027347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:26.601 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:26.601 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:20:26.601 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:20:26.601 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:26.601 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:26.601 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:26.601 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.A5f6G1FWFx
00:20:26.601 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.A5f6G1FWFx
00:20:26.601 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:20:26.861 [2024-11-26 19:10:43.895136] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:26.861 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:20:27.122 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:20:27.122 [2024-11-26 19:10:44.288137] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:20:27.122 [2024-11-26 19:10:44.288523] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:27.122 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:20:27.382 malloc0
00:20:27.382 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:20:27.643 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.A5f6G1FWFx
00:20:27.903 19:10:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
00:20:28.165 19:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2964773
00:20:28.165 19:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:20:28.165 19:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1
00:20:28.165 19:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2964773 /var/tmp/bdevperf.sock
00:20:28.165 19:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2964773 ']'
00:20:28.165 19:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:28.165 19:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:28.165 19:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:20:28.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:20:28.165 19:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:28.165 19:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:28.165 [2024-11-26 19:10:45.174610] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization...
00:20:28.165 [2024-11-26 19:10:45.174682] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2964773 ]
00:20:28.165 [2024-11-26 19:10:45.263333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:28.165 [2024-11-26 19:10:45.297272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:20:29.106 19:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:29.106 19:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:20:29.106 19:10:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.A5f6G1FWFx
00:20:29.106 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
00:20:29.367 [2024-11-26 19:10:46.320620] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:20:29.367 nvme0n1
00:20:29.367 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:20:29.367 Running I/O for 1 seconds...
00:20:30.568 5626.00 IOPS, 21.98 MiB/s
00:20:30.568 Latency(us)
[2024-11-26T18:10:47.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:30.568 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:20:30.568 Verification LBA range: start 0x0 length 0x2000
00:20:30.568 nvme0n1 : 1.03 5603.53 21.89 0.00 0.00 22582.30 4505.60 23702.19
00:20:30.568 [2024-11-26T18:10:47.781Z] ===================================================================================================================
00:20:30.568 [2024-11-26T18:10:47.781Z] Total : 5603.53 21.89 0.00 0.00 22582.30 4505.60 23702.19
00:20:30.568 {
00:20:30.568 "results": [
00:20:30.568 {
00:20:30.568 "job": "nvme0n1",
00:20:30.568 "core_mask": "0x2",
00:20:30.568 "workload": "verify",
00:20:30.568 "status": "finished",
00:20:30.568 "verify_range": {
00:20:30.568 "start": 0,
00:20:30.568 "length": 8192
00:20:30.568 },
00:20:30.568 "queue_depth": 128,
00:20:30.568 "io_size": 4096,
00:20:30.568 "runtime": 1.027031,
00:20:30.568 "iops": 5603.530954761833,
00:20:30.568 "mibps": 21.88879279203841,
00:20:30.568 "io_failed": 0,
00:20:30.568 "io_timeout": 0,
00:20:30.568 "avg_latency_us": 22582.302545033304,
00:20:30.568 "min_latency_us": 4505.6,
00:20:30.568 "max_latency_us": 23702.18666666667
00:20:30.568 }
00:20:30.568 ],
00:20:30.568 "core_count": 1
00:20:30.568 }
00:20:30.568 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2964773
00:20:30.568 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2964773 ']'
00:20:30.568 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2964773
00:20:30.568 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:20:30.568 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:30.568 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2964773
00:20:30.568 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:20:30.568 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:20:30.568 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2964773'
killing process with pid 2964773
19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2964773
00:20:30.568 Received shutdown signal, test time was about 1.000000 seconds
00:20:30.568
00:20:30.568 Latency(us)
[2024-11-26T18:10:47.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-26T18:10:47.781Z] ===================================================================================================================
[2024-11-26T18:10:47.781Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:30.568 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2964773
00:20:30.568 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2964262
00:20:30.568 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2964262 ']'
00:20:30.568 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2964262
00:20:30.568 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:20:30.568 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:30.568 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2964262
00:20:30.829 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:30.829 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:30.829 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2964262'
killing process with pid 2964262
19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2964262
00:20:30.829 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2964262
00:20:30.829 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart
00:20:30.829 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:20:30.829 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:30.829 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:30.829 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2965206
00:20:30.829 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2965206
00:20:30.829 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:20:30.829 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2965206 ']'
00:20:30.829 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:30.829 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:30.829 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:30.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:30.829 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:30.829 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:30.829 [2024-11-26 19:10:48.000433] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization...
00:20:30.829 [2024-11-26 19:10:48.000499] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:31.090 [2024-11-26 19:10:48.097802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:31.090 [2024-11-26 19:10:48.149071] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:31.090 [2024-11-26 19:10:48.149121] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:31.090 [2024-11-26 19:10:48.149129] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:31.090 [2024-11-26 19:10:48.149136] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:31.090 [2024-11-26 19:10:48.149142] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:31.090 [2024-11-26 19:10:48.149908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:31.662 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:31.662 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:20:31.662 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:20:31.662 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:31.662 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:31.662 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:31.662 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd
00:20:31.662 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:31.662 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:31.662 [2024-11-26 19:10:48.862103] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:31.923 malloc0
00:20:31.923 [2024-11-26 19:10:48.892258] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:20:31.923 [2024-11-26 19:10:48.892578] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:31.923 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:31.923 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2965488
00:20:31.923 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2965488 /var/tmp/bdevperf.sock
00:20:31.923 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1
00:20:31.923 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2965488 ']'
00:20:31.923 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:31.923 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:31.923 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:20:31.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:20:31.923 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:31.923 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:31.923 [2024-11-26 19:10:48.975882] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization...
00:20:31.923 [2024-11-26 19:10:48.975942] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2965488 ]
00:20:31.923 [2024-11-26 19:10:49.064681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:31.923 [2024-11-26 19:10:49.098681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:20:32.865 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:32.865 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:20:32.865 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.A5f6G1FWFx
00:20:32.865 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
00:20:33.125 [2024-11-26 19:10:50.077742] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:20:33.125 nvme0n1
00:20:33.125 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:20:33.125 Running I/O for 1 seconds...
00:20:34.066 5382.00 IOPS, 21.02 MiB/s
00:20:34.066 Latency(us)
[2024-11-26T18:10:51.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:34.066 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:20:34.066 Verification LBA range: start 0x0 length 0x2000
00:20:34.066 nvme0n1 : 1.01 5448.66 21.28 0.00 0.00 23359.75 4450.99 33641.81
00:20:34.066 [2024-11-26T18:10:51.279Z] ===================================================================================================================
00:20:34.066 [2024-11-26T18:10:51.279Z] Total : 5448.66 21.28 0.00 0.00 23359.75 4450.99 33641.81
00:20:34.066 {
00:20:34.066 "results": [
00:20:34.066 {
00:20:34.066 "job": "nvme0n1",
00:20:34.066 "core_mask": "0x2",
00:20:34.067 "workload": "verify",
00:20:34.067 "status": "finished",
00:20:34.067 "verify_range": {
00:20:34.067 "start": 0,
00:20:34.067 "length": 8192
00:20:34.067 },
00:20:34.067 "queue_depth": 128,
00:20:34.067 "io_size": 4096,
00:20:34.067 "runtime": 1.011257,
00:20:34.067 "iops": 5448.664385017854,
00:20:34.067 "mibps": 21.283845253975993,
00:20:34.067 "io_failed": 0,
00:20:34.067 "io_timeout": 0,
00:20:34.067 "avg_latency_us": 23359.753137326075,
00:20:34.067 "min_latency_us": 4450.986666666667,
00:20:34.067 "max_latency_us": 33641.81333333333
00:20:34.067 }
00:20:34.067 ],
00:20:34.067 "core_count": 1
00:20:34.067 }
00:20:34.328 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config
00:20:34.328 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:34.328 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:34.328 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:34.328 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{
00:20:34.328 "subsystems": [
00:20:34.328 {
00:20:34.328 "subsystem": "keyring",
00:20:34.328 "config": [
00:20:34.328 {
00:20:34.328 "method": "keyring_file_add_key",
00:20:34.328 "params": {
00:20:34.328 "name": "key0",
00:20:34.328 "path": "/tmp/tmp.A5f6G1FWFx"
00:20:34.328 }
00:20:34.328 }
00:20:34.328 ]
00:20:34.328 },
00:20:34.328 {
00:20:34.328 "subsystem": "iobuf",
00:20:34.328 "config": [
00:20:34.328 {
00:20:34.328 "method": "iobuf_set_options",
00:20:34.328 "params": {
00:20:34.328 "small_pool_count": 8192,
00:20:34.328 "large_pool_count": 1024,
00:20:34.328 "small_bufsize": 8192,
00:20:34.328 "large_bufsize": 135168,
00:20:34.328 "enable_numa": false
00:20:34.328 }
00:20:34.328 }
00:20:34.328 ]
00:20:34.328 },
00:20:34.328 {
00:20:34.328 "subsystem": "sock",
00:20:34.328 "config": [
00:20:34.328 {
00:20:34.328 "method": "sock_set_default_impl",
00:20:34.328 "params": {
00:20:34.328 "impl_name": "posix"
00:20:34.328 }
00:20:34.328 },
00:20:34.328 {
00:20:34.328 "method": "sock_impl_set_options",
00:20:34.328 "params": {
00:20:34.328 "impl_name": "ssl",
00:20:34.328 "recv_buf_size": 4096,
00:20:34.328 "send_buf_size": 4096,
00:20:34.328 "enable_recv_pipe": true,
00:20:34.328 "enable_quickack": false,
00:20:34.328 "enable_placement_id": 0,
00:20:34.328 "enable_zerocopy_send_server": true,
00:20:34.328 "enable_zerocopy_send_client": false,
00:20:34.328 "zerocopy_threshold": 0,
00:20:34.328 "tls_version": 0,
00:20:34.328 "enable_ktls": false
00:20:34.328 }
00:20:34.328 },
00:20:34.328 {
00:20:34.328 "method": "sock_impl_set_options",
00:20:34.328 "params": {
00:20:34.328 "impl_name": "posix",
00:20:34.328 "recv_buf_size": 2097152,
00:20:34.328 "send_buf_size": 2097152,
00:20:34.328 "enable_recv_pipe": true,
00:20:34.328 "enable_quickack": false,
00:20:34.328 "enable_placement_id": 0,
00:20:34.328 "enable_zerocopy_send_server": true,
00:20:34.328 "enable_zerocopy_send_client": false,
00:20:34.328 "zerocopy_threshold": 0,
00:20:34.328 "tls_version": 0,
00:20:34.328 "enable_ktls": false
00:20:34.328 }
00:20:34.328 }
00:20:34.328 ]
00:20:34.328 },
00:20:34.328 {
00:20:34.328 "subsystem": "vmd",
00:20:34.328 "config": []
00:20:34.328 },
00:20:34.328 {
00:20:34.328 "subsystem": "accel",
00:20:34.328 "config": [
00:20:34.328 {
00:20:34.328 "method": "accel_set_options",
00:20:34.328 "params": {
00:20:34.328 "small_cache_size": 128,
00:20:34.328 "large_cache_size": 16,
00:20:34.328 "task_count": 2048,
00:20:34.328 "sequence_count": 2048,
00:20:34.328 "buf_count": 2048
00:20:34.328 }
00:20:34.328 }
00:20:34.328 ]
00:20:34.328 },
00:20:34.328 {
00:20:34.328 "subsystem": "bdev",
00:20:34.328 "config": [
00:20:34.328 {
00:20:34.328 "method": "bdev_set_options",
00:20:34.328 "params": {
00:20:34.328 "bdev_io_pool_size": 65535,
00:20:34.328 "bdev_io_cache_size": 256,
00:20:34.328 "bdev_auto_examine": true,
00:20:34.328 "iobuf_small_cache_size": 128,
00:20:34.328 "iobuf_large_cache_size": 16
00:20:34.328 }
00:20:34.328 },
00:20:34.328 {
00:20:34.328 "method": "bdev_raid_set_options",
00:20:34.328 "params": {
00:20:34.328 "process_window_size_kb": 1024,
00:20:34.328 "process_max_bandwidth_mb_sec": 0
00:20:34.328 }
00:20:34.328 },
00:20:34.328 {
00:20:34.328 "method": "bdev_iscsi_set_options",
00:20:34.328 "params": {
00:20:34.328 "timeout_sec": 30
00:20:34.328 }
00:20:34.328 },
00:20:34.328 {
00:20:34.328 "method": "bdev_nvme_set_options",
00:20:34.328 "params": {
00:20:34.328 "action_on_timeout": "none",
00:20:34.328 "timeout_us": 0,
00:20:34.328 "timeout_admin_us": 0,
00:20:34.328 "keep_alive_timeout_ms": 10000,
00:20:34.328 "arbitration_burst": 0,
00:20:34.328 "low_priority_weight": 0,
00:20:34.328 "medium_priority_weight": 0,
00:20:34.328 "high_priority_weight": 0,
00:20:34.329 "nvme_adminq_poll_period_us": 10000,
00:20:34.329 "nvme_ioq_poll_period_us": 0,
00:20:34.329 "io_queue_requests": 0,
00:20:34.329 "delay_cmd_submit": true,
00:20:34.329 "transport_retry_count": 4,
00:20:34.329 "bdev_retry_count": 3,
00:20:34.329 "transport_ack_timeout": 0,
00:20:34.329 "ctrlr_loss_timeout_sec": 0,
00:20:34.329 "reconnect_delay_sec": 0,
00:20:34.329 "fast_io_fail_timeout_sec": 0,
00:20:34.329 "disable_auto_failback": false,
00:20:34.329 "generate_uuids": false,
00:20:34.329 "transport_tos": 0,
00:20:34.329 "nvme_error_stat": false,
00:20:34.329 "rdma_srq_size": 0,
00:20:34.329 "io_path_stat": false,
00:20:34.329 "allow_accel_sequence": false,
00:20:34.329 "rdma_max_cq_size": 0,
00:20:34.329 "rdma_cm_event_timeout_ms": 0,
00:20:34.329 "dhchap_digests": [
00:20:34.329 "sha256",
00:20:34.329 "sha384",
00:20:34.329 "sha512"
00:20:34.329 ],
00:20:34.329 "dhchap_dhgroups": [
00:20:34.329 "null",
00:20:34.329 "ffdhe2048",
00:20:34.329 "ffdhe3072",
00:20:34.329 "ffdhe4096",
00:20:34.329 "ffdhe6144",
00:20:34.329 "ffdhe8192"
00:20:34.329 ]
00:20:34.329 }
00:20:34.329 },
00:20:34.329 {
00:20:34.329 "method": "bdev_nvme_set_hotplug",
00:20:34.329 "params": {
00:20:34.329 "period_us": 100000,
00:20:34.329 "enable": false
00:20:34.329 }
00:20:34.329 },
00:20:34.329 {
00:20:34.329 "method": "bdev_malloc_create",
00:20:34.329 "params": {
00:20:34.329 "name": "malloc0",
00:20:34.329 "num_blocks": 8192,
00:20:34.329 "block_size": 4096,
00:20:34.329 "physical_block_size": 4096,
00:20:34.329 "uuid": "3f7eaf87-6a1c-4d6b-b679-078046a366c1",
00:20:34.329 "optimal_io_boundary": 0,
00:20:34.329 "md_size": 0,
00:20:34.329 "dif_type": 0,
00:20:34.329 "dif_is_head_of_md": false,
00:20:34.329 "dif_pi_format": 0
00:20:34.329 }
00:20:34.329 },
00:20:34.329 {
00:20:34.329 "method": "bdev_wait_for_examine"
00:20:34.329 }
00:20:34.329 ]
00:20:34.329 },
00:20:34.329 {
00:20:34.329 "subsystem": "nbd",
00:20:34.329 "config": []
00:20:34.329 },
00:20:34.329 {
00:20:34.329 "subsystem": "scheduler",
00:20:34.329 "config": [
00:20:34.329 {
00:20:34.329 "method": "framework_set_scheduler",
00:20:34.329 "params": {
00:20:34.329 "name": "static"
00:20:34.329 }
00:20:34.329 }
00:20:34.329 ]
00:20:34.329 },
00:20:34.329 {
00:20:34.329 "subsystem": "nvmf",
00:20:34.329 "config": [
00:20:34.329 {
00:20:34.329 "method": "nvmf_set_config",
00:20:34.329 "params": {
00:20:34.329 "discovery_filter": "match_any",
00:20:34.329 "admin_cmd_passthru": {
00:20:34.329 "identify_ctrlr": false
00:20:34.329 },
00:20:34.329 "dhchap_digests": [
00:20:34.329 "sha256",
00:20:34.329 "sha384",
00:20:34.329 "sha512"
00:20:34.329 ],
00:20:34.329 "dhchap_dhgroups": [
00:20:34.329 "null",
00:20:34.329 "ffdhe2048",
00:20:34.329 "ffdhe3072",
00:20:34.329 "ffdhe4096",
00:20:34.329 "ffdhe6144",
00:20:34.329 "ffdhe8192"
00:20:34.329 ]
00:20:34.329 }
00:20:34.329 },
00:20:34.329 {
00:20:34.329 "method": "nvmf_set_max_subsystems",
00:20:34.329 "params": {
00:20:34.329 "max_subsystems": 1024
00:20:34.329 }
00:20:34.329 },
00:20:34.329 {
00:20:34.329 "method": "nvmf_set_crdt",
00:20:34.329 "params": {
00:20:34.329 "crdt1": 0,
00:20:34.329 "crdt2": 0,
00:20:34.329 "crdt3": 0
00:20:34.329 }
00:20:34.329 },
00:20:34.329 {
00:20:34.329 "method": "nvmf_create_transport",
00:20:34.329 "params": {
00:20:34.329 "trtype": "TCP",
00:20:34.329 "max_queue_depth": 128,
00:20:34.329 "max_io_qpairs_per_ctrlr": 127,
00:20:34.329 "in_capsule_data_size": 4096,
00:20:34.329 "max_io_size": 131072,
00:20:34.329 "io_unit_size": 131072,
00:20:34.329 "max_aq_depth": 128,
00:20:34.329 "num_shared_buffers": 511,
00:20:34.329 "buf_cache_size": 4294967295,
00:20:34.329 "dif_insert_or_strip": false,
00:20:34.329 "zcopy": false,
00:20:34.329 "c2h_success": false,
00:20:34.329 "sock_priority": 0,
00:20:34.329 "abort_timeout_sec": 1,
00:20:34.329 "ack_timeout": 0,
00:20:34.329 "data_wr_pool_size": 0
00:20:34.329 }
00:20:34.329 },
00:20:34.329 {
00:20:34.329 "method": "nvmf_create_subsystem",
00:20:34.329 "params": {
00:20:34.329 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:20:34.329 "allow_any_host": false,
00:20:34.329 "serial_number": "00000000000000000000",
00:20:34.329 "model_number": "SPDK bdev Controller",
00:20:34.329 "max_namespaces": 32,
00:20:34.329 "min_cntlid": 1,
00:20:34.329 "max_cntlid": 65519,
00:20:34.329 "ana_reporting": false
00:20:34.329 }
00:20:34.329 },
00:20:34.329 {
00:20:34.329 "method": "nvmf_subsystem_add_host",
00:20:34.329 "params": {
00:20:34.329 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:20:34.329 "host": "nqn.2016-06.io.spdk:host1",
00:20:34.329 "psk": "key0"
00:20:34.329 }
00:20:34.329 },
00:20:34.329 {
00:20:34.329 "method": "nvmf_subsystem_add_ns",
00:20:34.329 "params": {
00:20:34.329 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:20:34.329 "namespace": {
00:20:34.329 "nsid": 1,
00:20:34.329 "bdev_name": "malloc0",
00:20:34.329 "nguid": "3F7EAF876A1C4D6BB679078046A366C1",
00:20:34.329 "uuid": "3f7eaf87-6a1c-4d6b-b679-078046a366c1",
00:20:34.329 "no_auto_visible": false
00:20:34.329 }
00:20:34.329 }
00:20:34.329 },
00:20:34.329 {
00:20:34.329 "method": "nvmf_subsystem_add_listener",
00:20:34.329 "params": {
00:20:34.329 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:20:34.329 "listen_address": {
00:20:34.329 "trtype": "TCP",
00:20:34.329 "adrfam": "IPv4",
00:20:34.329 "traddr": "10.0.0.2",
00:20:34.330 "trsvcid": "4420"
00:20:34.330 },
00:20:34.330 "secure_channel": false,
00:20:34.330 "sock_impl": "ssl"
00:20:34.330 }
00:20:34.330 }
00:20:34.330 ]
00:20:34.330 }
00:20:34.330 ]
00:20:34.330 }'
00:20:34.330 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config
00:20:34.592 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{
00:20:34.592 "subsystems": [
00:20:34.592 {
00:20:34.592 "subsystem": "keyring",
00:20:34.592 "config": [
00:20:34.592 {
00:20:34.592 "method": "keyring_file_add_key",
00:20:34.592 "params": {
00:20:34.592 "name": "key0",
00:20:34.592 "path": "/tmp/tmp.A5f6G1FWFx"
00:20:34.592 }
00:20:34.592 }
00:20:34.592 ]
00:20:34.592 },
00:20:34.592 {
00:20:34.592 "subsystem": "iobuf",
00:20:34.592 "config": [
00:20:34.592 {
00:20:34.592 "method": "iobuf_set_options",
00:20:34.592 "params": {
00:20:34.592 "small_pool_count": 8192,
00:20:34.592 "large_pool_count": 1024,
00:20:34.592 "small_bufsize": 8192,
00:20:34.592 "large_bufsize": 135168,
00:20:34.592 "enable_numa": false
00:20:34.592 }
00:20:34.592 }
00:20:34.592 ]
00:20:34.592 },
00:20:34.592 {
00:20:34.592 "subsystem": "sock",
00:20:34.592 "config": [
00:20:34.592 {
00:20:34.592 "method": "sock_set_default_impl",
00:20:34.592 "params": {
00:20:34.592 "impl_name": "posix"
00:20:34.592 }
00:20:34.592 },
00:20:34.592 {
00:20:34.592 "method": "sock_impl_set_options",
00:20:34.592 "params": {
00:20:34.592 "impl_name": "ssl",
00:20:34.592 "recv_buf_size": 4096,
00:20:34.592 "send_buf_size": 4096,
00:20:34.592 "enable_recv_pipe": true,
00:20:34.592 "enable_quickack": false,
00:20:34.592 "enable_placement_id": 0,
00:20:34.592 "enable_zerocopy_send_server": true,
00:20:34.592 "enable_zerocopy_send_client": false,
00:20:34.592 "zerocopy_threshold": 0,
00:20:34.592 "tls_version": 0,
00:20:34.592 "enable_ktls": false
00:20:34.592 }
00:20:34.592 },
00:20:34.592 {
00:20:34.592 "method": "sock_impl_set_options",
00:20:34.592 "params": {
00:20:34.592 "impl_name": "posix",
00:20:34.592 "recv_buf_size": 2097152,
00:20:34.592 "send_buf_size": 2097152,
00:20:34.592 "enable_recv_pipe": true,
00:20:34.592 "enable_quickack": false,
00:20:34.592 "enable_placement_id": 0,
00:20:34.592 "enable_zerocopy_send_server": true,
00:20:34.592 "enable_zerocopy_send_client": false,
00:20:34.592 "zerocopy_threshold": 0,
00:20:34.592 "tls_version": 0,
00:20:34.592 "enable_ktls": false
00:20:34.592 }
00:20:34.592 }
00:20:34.592 ]
00:20:34.592 },
00:20:34.592 {
00:20:34.592 "subsystem": "vmd",
00:20:34.592 "config": []
00:20:34.592 },
00:20:34.592 {
00:20:34.592 "subsystem": "accel",
00:20:34.592 "config": [
00:20:34.592 {
00:20:34.592 "method": "accel_set_options",
00:20:34.592 "params": {
00:20:34.592 "small_cache_size": 128,
00:20:34.592 "large_cache_size": 16,
00:20:34.592 "task_count": 2048,
00:20:34.592 "sequence_count": 2048,
00:20:34.592 "buf_count": 2048
00:20:34.592 }
00:20:34.592 }
00:20:34.592 ]
00:20:34.592 },
00:20:34.592 {
00:20:34.592 "subsystem": "bdev",
00:20:34.592 "config": [
00:20:34.592 {
00:20:34.592 "method": "bdev_set_options",
00:20:34.592 "params": {
00:20:34.592 "bdev_io_pool_size": 65535,
00:20:34.592 "bdev_io_cache_size": 256,
00:20:34.592 "bdev_auto_examine": true,
00:20:34.592 "iobuf_small_cache_size": 128,
00:20:34.592 "iobuf_large_cache_size": 16
00:20:34.592 }
00:20:34.592 },
00:20:34.592 {
00:20:34.592 "method": "bdev_raid_set_options",
00:20:34.592 "params": {
00:20:34.592 "process_window_size_kb": 1024,
00:20:34.592 "process_max_bandwidth_mb_sec": 0
00:20:34.592 }
00:20:34.592 },
00:20:34.592 {
00:20:34.592 "method": "bdev_iscsi_set_options",
00:20:34.592 "params": {
00:20:34.592 "timeout_sec": 30
00:20:34.592 }
00:20:34.592 },
00:20:34.592 {
00:20:34.592 "method": "bdev_nvme_set_options",
00:20:34.592 "params": {
00:20:34.592 "action_on_timeout": "none",
00:20:34.592 "timeout_us": 0,
00:20:34.592 "timeout_admin_us": 0,
00:20:34.592 "keep_alive_timeout_ms": 10000,
00:20:34.592 "arbitration_burst": 0,
00:20:34.592 "low_priority_weight": 0,
00:20:34.592 "medium_priority_weight": 0,
00:20:34.592 "high_priority_weight": 0,
00:20:34.592 "nvme_adminq_poll_period_us": 10000,
00:20:34.592 "nvme_ioq_poll_period_us": 0,
00:20:34.592 "io_queue_requests": 512,
00:20:34.592 "delay_cmd_submit": true,
00:20:34.592 "transport_retry_count": 4,
00:20:34.592 "bdev_retry_count": 3,
00:20:34.592 "transport_ack_timeout": 0,
00:20:34.592 "ctrlr_loss_timeout_sec": 0,
00:20:34.592 "reconnect_delay_sec": 0,
00:20:34.592 "fast_io_fail_timeout_sec": 0,
00:20:34.592 "disable_auto_failback": false,
00:20:34.592 "generate_uuids": false,
00:20:34.592 "transport_tos": 0,
00:20:34.592 "nvme_error_stat": false,
00:20:34.592 "rdma_srq_size": 0,
00:20:34.592 "io_path_stat": false,
00:20:34.592 "allow_accel_sequence": false,
00:20:34.592 "rdma_max_cq_size": 0,
00:20:34.592 "rdma_cm_event_timeout_ms": 0,
00:20:34.592 "dhchap_digests": [
00:20:34.592 "sha256",
00:20:34.592 "sha384",
00:20:34.592 "sha512"
00:20:34.592 ],
00:20:34.592 "dhchap_dhgroups": [
00:20:34.592 "null",
00:20:34.592 "ffdhe2048",
00:20:34.592 "ffdhe3072",
00:20:34.592 "ffdhe4096",
00:20:34.592 "ffdhe6144",
00:20:34.592 "ffdhe8192"
00:20:34.592 ]
00:20:34.592 }
00:20:34.592 },
00:20:34.592 {
00:20:34.592 "method": "bdev_nvme_attach_controller",
00:20:34.592 "params": {
00:20:34.592 "name": "nvme0",
00:20:34.592 "trtype": "TCP",
00:20:34.592 "adrfam": "IPv4",
00:20:34.592 "traddr": "10.0.0.2",
00:20:34.592 "trsvcid": "4420",
00:20:34.592 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:20:34.592 "prchk_reftag": false,
00:20:34.592 "prchk_guard": false,
00:20:34.592 "ctrlr_loss_timeout_sec": 0,
00:20:34.592 "reconnect_delay_sec": 0,
00:20:34.592 "fast_io_fail_timeout_sec": 0,
00:20:34.592 "psk": "key0",
00:20:34.592 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:20:34.592 "hdgst": false,
00:20:34.592 "ddgst": false,
00:20:34.592 "multipath": "multipath"
00:20:34.592 }
00:20:34.592 },
00:20:34.592 {
00:20:34.592 "method": "bdev_nvme_set_hotplug",
00:20:34.592 "params": {
00:20:34.592 "period_us": 100000,
00:20:34.592 "enable": false
00:20:34.592 }
00:20:34.592 },
00:20:34.592 {
00:20:34.592 "method": "bdev_enable_histogram",
00:20:34.592 "params": {
00:20:34.592 "name": "nvme0n1",
00:20:34.592 "enable": true
00:20:34.592 }
00:20:34.592 },
00:20:34.592 {
00:20:34.592 "method": "bdev_wait_for_examine"
00:20:34.592 }
00:20:34.592 ]
00:20:34.592 },
00:20:34.592 {
00:20:34.592 "subsystem": "nbd",
00:20:34.592 "config": []
00:20:34.592 }
00:20:34.592 ]
00:20:34.592 }'
00:20:34.592 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2965488
00:20:34.592 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2965488 ']'
00:20:34.592 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2965488
00:20:34.592 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:20:34.592 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:34.592 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2965488
00:20:34.592 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:20:34.592 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:20:34.592 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2965488'
killing process with pid 2965488
19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2965488
00:20:34.593 Received shutdown signal, test time was about 1.000000 seconds
00:20:34.593
00:20:34.593 Latency(us)
[2024-11-26T18:10:51.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-26T18:10:51.806Z] ===================================================================================================================
[2024-11-26T18:10:51.806Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:34.593 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2965488
00:20:34.853 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2965206
00:20:34.853 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2965206
']' 00:20:34.853 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2965206 00:20:34.853 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:34.853 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:34.853 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2965206 00:20:34.853 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:34.853 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:34.853 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2965206' 00:20:34.853 killing process with pid 2965206 00:20:34.853 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2965206 00:20:34.853 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2965206 00:20:34.853 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:34.854 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:34.854 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:34.854 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.854 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:34.854 "subsystems": [ 00:20:34.854 { 00:20:34.854 "subsystem": "keyring", 00:20:34.854 "config": [ 00:20:34.854 { 00:20:34.854 "method": "keyring_file_add_key", 00:20:34.854 "params": { 00:20:34.854 "name": "key0", 00:20:34.854 "path": "/tmp/tmp.A5f6G1FWFx" 00:20:34.854 } 00:20:34.854 } 00:20:34.854 ] 00:20:34.854 }, 00:20:34.854 { 00:20:34.854 "subsystem": "iobuf", 00:20:34.854 "config": [ 00:20:34.854 { 00:20:34.854 "method": "iobuf_set_options", 00:20:34.854 "params": { 00:20:34.854 "small_pool_count": 8192, 00:20:34.854 "large_pool_count": 1024, 00:20:34.854 "small_bufsize": 8192, 00:20:34.854 "large_bufsize": 135168, 00:20:34.854 "enable_numa": false 00:20:34.854 } 00:20:34.854 } 00:20:34.854 ] 00:20:34.854 }, 00:20:34.854 { 00:20:34.854 "subsystem": "sock", 00:20:34.854 "config": [ 00:20:34.854 { 00:20:34.854 "method": "sock_set_default_impl", 00:20:34.854 "params": { 00:20:34.854 "impl_name": "posix" 00:20:34.854 } 00:20:34.854 }, 00:20:34.854 { 00:20:34.854 "method": "sock_impl_set_options", 00:20:34.854 "params": { 00:20:34.854 "impl_name": "ssl", 00:20:34.854 "recv_buf_size": 4096, 00:20:34.854 "send_buf_size": 4096, 00:20:34.854 "enable_recv_pipe": true, 00:20:34.854 "enable_quickack": false, 00:20:34.854 "enable_placement_id": 0, 00:20:34.854 "enable_zerocopy_send_server": true, 00:20:34.854 "enable_zerocopy_send_client": false, 00:20:34.854 "zerocopy_threshold": 0, 00:20:34.854 "tls_version": 0, 00:20:34.854 "enable_ktls": false 00:20:34.854 } 00:20:34.854 }, 00:20:34.854 { 00:20:34.854 "method": "sock_impl_set_options", 00:20:34.854 "params": { 00:20:34.854 "impl_name": "posix", 00:20:34.854 "recv_buf_size": 2097152, 00:20:34.854 "send_buf_size": 2097152, 00:20:34.854 "enable_recv_pipe": true, 00:20:34.854 "enable_quickack": false, 00:20:34.854 "enable_placement_id": 0, 00:20:34.854 "enable_zerocopy_send_server": true, 00:20:34.854 "enable_zerocopy_send_client": 
false, 00:20:34.854 "zerocopy_threshold": 0, 00:20:34.854 "tls_version": 0, 00:20:34.854 "enable_ktls": false 00:20:34.854 } 00:20:34.854 } 00:20:34.854 ] 00:20:34.854 }, 00:20:34.854 { 00:20:34.854 "subsystem": "vmd", 00:20:34.854 "config": [] 00:20:34.854 }, 00:20:34.854 { 00:20:34.854 "subsystem": "accel", 00:20:34.854 "config": [ 00:20:34.854 { 00:20:34.854 "method": "accel_set_options", 00:20:34.854 "params": { 00:20:34.854 "small_cache_size": 128, 00:20:34.854 "large_cache_size": 16, 00:20:34.854 "task_count": 2048, 00:20:34.854 "sequence_count": 2048, 00:20:34.854 "buf_count": 2048 00:20:34.854 } 00:20:34.854 } 00:20:34.854 ] 00:20:34.854 }, 00:20:34.854 { 00:20:34.854 "subsystem": "bdev", 00:20:34.854 "config": [ 00:20:34.854 { 00:20:34.854 "method": "bdev_set_options", 00:20:34.854 "params": { 00:20:34.854 "bdev_io_pool_size": 65535, 00:20:34.854 "bdev_io_cache_size": 256, 00:20:34.854 "bdev_auto_examine": true, 00:20:34.854 "iobuf_small_cache_size": 128, 00:20:34.854 "iobuf_large_cache_size": 16 00:20:34.854 } 00:20:34.854 }, 00:20:34.854 { 00:20:34.854 "method": "bdev_raid_set_options", 00:20:34.854 "params": { 00:20:34.854 "process_window_size_kb": 1024, 00:20:34.854 "process_max_bandwidth_mb_sec": 0 00:20:34.854 } 00:20:34.854 }, 00:20:34.854 { 00:20:34.854 "method": "bdev_iscsi_set_options", 00:20:34.854 "params": { 00:20:34.854 "timeout_sec": 30 00:20:34.854 } 00:20:34.854 }, 00:20:34.854 { 00:20:34.854 "method": "bdev_nvme_set_options", 00:20:34.854 "params": { 00:20:34.854 "action_on_timeout": "none", 00:20:34.854 "timeout_us": 0, 00:20:34.854 "timeout_admin_us": 0, 00:20:34.854 "keep_alive_timeout_ms": 10000, 00:20:34.854 "arbitration_burst": 0, 00:20:34.854 "low_priority_weight": 0, 00:20:34.854 "medium_priority_weight": 0, 00:20:34.854 "high_priority_weight": 0, 00:20:34.854 "nvme_adminq_poll_period_us": 10000, 00:20:34.854 "nvme_ioq_poll_period_us": 0, 00:20:34.854 "io_queue_requests": 0, 00:20:34.854 "delay_cmd_submit": true, 00:20:34.854 "transport_retry_count": 4, 00:20:34.854 "bdev_retry_count": 3, 00:20:34.854 "transport_ack_timeout": 0, 00:20:34.854 "ctrlr_loss_timeout_sec": 0, 00:20:34.854 "reconnect_delay_sec": 0, 00:20:34.854 "fast_io_fail_timeout_sec": 0, 00:20:34.854 "disable_auto_failback": false, 00:20:34.854 "generate_uuids": false, 00:20:34.854 "transport_tos": 0, 00:20:34.854 "nvme_error_stat": false, 00:20:34.854 "rdma_srq_size": 0, 00:20:34.854 "io_path_stat": false, 00:20:34.854 "allow_accel_sequence": false, 00:20:34.854 "rdma_max_cq_size": 0, 00:20:34.854 "rdma_cm_event_timeout_ms": 0, 00:20:34.854 "dhchap_digests": [ 00:20:34.854 "sha256", 00:20:34.854 "sha384", 00:20:34.854 "sha512" 00:20:34.854 ], 00:20:34.854 "dhchap_dhgroups": [ 00:20:34.854 "null", 00:20:34.854 "ffdhe2048", 00:20:34.854 "ffdhe3072", 00:20:34.854 "ffdhe4096", 00:20:34.854 "ffdhe6144", 00:20:34.854 "ffdhe8192" 00:20:34.854 ] 00:20:34.854 } 00:20:34.854 }, 00:20:34.854 { 00:20:34.854 "method": "bdev_nvme_set_hotplug", 00:20:34.854 "params": { 00:20:34.854 "period_us": 100000, 00:20:34.854 "enable": false 00:20:34.854 } 00:20:34.854 }, 00:20:34.854 { 00:20:34.854 "method": "bdev_malloc_create", 00:20:34.854 "params": { 00:20:34.854 "name": "malloc0", 00:20:34.854 "num_blocks": 8192, 00:20:34.854 "block_size": 4096, 00:20:34.854 "physical_block_size": 4096, 00:20:34.854 "uuid": "3f7eaf87-6a1c-4d6b-b679-078046a366c1", 00:20:34.854 "optimal_io_boundary": 0, 00:20:34.854 "md_size": 0, 00:20:34.854 "dif_type": 0, 00:20:34.854 "dif_is_head_of_md": false, 00:20:34.854 "dif_pi_format": 0 
00:20:34.854 } 00:20:34.854 }, 00:20:34.854 { 00:20:34.854 "method": "bdev_wait_for_examine" 00:20:34.854 } 00:20:34.854 ] 00:20:34.854 }, 00:20:34.854 { 00:20:34.854 "subsystem": "nbd", 00:20:34.854 "config": [] 00:20:34.854 }, 00:20:34.854 { 00:20:34.854 "subsystem": "scheduler", 00:20:34.854 "config": [ 00:20:34.854 { 00:20:34.854 "method": "framework_set_scheduler", 00:20:34.854 "params": { 00:20:34.854 "name": "static" 00:20:34.854 } 00:20:34.854 } 00:20:34.854 ] 00:20:34.854 }, 00:20:34.854 { 00:20:34.854 "subsystem": "nvmf", 00:20:34.854 "config": [ 00:20:34.854 { 00:20:34.854 "method": "nvmf_set_config", 00:20:34.854 "params": { 00:20:34.854 "discovery_filter": "match_any", 00:20:34.854 "admin_cmd_passthru": { 00:20:34.854 "identify_ctrlr": false 00:20:34.854 }, 00:20:34.854 "dhchap_digests": [ 00:20:34.854 "sha256", 00:20:34.854 "sha384", 00:20:34.854 "sha512" 00:20:34.854 ], 00:20:34.854 "dhchap_dhgroups": [ 00:20:34.854 "null", 00:20:34.854 "ffdhe2048", 00:20:34.854 "ffdhe3072", 00:20:34.854 "ffdhe4096", 00:20:34.854 "ffdhe6144", 00:20:34.854 "ffdhe8192" 00:20:34.854 ] 00:20:34.854 } 00:20:34.854 }, 00:20:34.854 { 00:20:34.854 "method": "nvmf_set_max_subsystems", 00:20:34.854 "params": { 00:20:34.854 "max_subsystems": 1024 00:20:34.854 } 00:20:34.854 }, 00:20:34.854 { 00:20:34.854 "method": "nvmf_set_crdt", 00:20:34.854 "params": { 00:20:34.854 "crdt1": 0, 00:20:34.854 "crdt2": 0, 00:20:34.854 "crdt3": 0 00:20:34.854 } 00:20:34.854 }, 00:20:34.854 { 00:20:34.854 "method": "nvmf_create_transport", 00:20:34.854 "params": { 00:20:34.854 "trtype": "TCP", 00:20:34.854 "max_queue_depth": 128, 00:20:34.854 "max_io_qpairs_per_ctrlr": 127, 00:20:34.854 "in_capsule_data_size": 4096, 00:20:34.854 "max_io_size": 131072, 00:20:34.854 "io_unit_size": 131072, 00:20:34.854 "max_aq_depth": 128, 00:20:34.854 "num_shared_buffers": 511, 00:20:34.854 "buf_cache_size": 4294967295, 00:20:34.854 "dif_insert_or_strip": false, 00:20:34.854 "zcopy": false, 00:20:34.855 "c2h_success": false, 00:20:34.855 "sock_priority": 0, 00:20:34.855 "abort_timeout_sec": 1, 00:20:34.855 "ack_timeout": 0, 00:20:34.855 "data_wr_pool_size": 0 00:20:34.855 } 00:20:34.855 }, 00:20:34.855 { 00:20:34.855 "method": "nvmf_create_subsystem", 00:20:34.855 "params": { 00:20:34.855 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.855 "allow_any_host": false, 00:20:34.855 "serial_number": "00000000000000000000", 00:20:34.855 "model_number": "SPDK bdev Controller", 00:20:34.855 "max_namespaces": 32, 00:20:34.855 "min_cntlid": 1, 00:20:34.855 "max_cntlid": 65519, 00:20:34.855 "ana_reporting": false 00:20:34.855 } 00:20:34.855 }, 00:20:34.855 { 00:20:34.855 "method": "nvmf_subsystem_add_host", 00:20:34.855 "params": { 00:20:34.855 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.855 "host": "nqn.2016-06.io.spdk:host1", 00:20:34.855 "psk": "key0" 00:20:34.855 } 00:20:34.855 }, 00:20:34.855 { 00:20:34.855 "method": "nvmf_subsystem_add_ns", 00:20:34.855 "params": { 00:20:34.855 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.855 "namespace": { 00:20:34.855 "nsid": 1, 00:20:34.855 "bdev_name": "malloc0", 00:20:34.855 "nguid": "3F7EAF876A1C4D6BB679078046A366C1", 00:20:34.855 "uuid": "3f7eaf87-6a1c-4d6b-b679-078046a366c1", 00:20:34.855 "no_auto_visible": false 00:20:34.855 } 00:20:34.855 } 00:20:34.855 }, 00:20:34.855 { 00:20:34.855 "method": "nvmf_subsystem_add_listener", 00:20:34.855 "params": { 00:20:34.855 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.855 "listen_address": { 00:20:34.855 "trtype": "TCP", 00:20:34.855 "adrfam": "IPv4", 
00:20:34.855 "traddr": "10.0.0.2", 00:20:34.855 "trsvcid": "4420" 00:20:34.855 }, 00:20:34.855 "secure_channel": false, 00:20:34.855 "sock_impl": "ssl" 00:20:34.855 } 00:20:34.855 } 00:20:34.855 ] 00:20:34.855 } 00:20:34.855 ] 00:20:34.855 }' 00:20:34.855 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2966171 00:20:34.855 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2966171 00:20:34.855 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:34.855 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2966171 ']' 00:20:34.855 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.855 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:34.855 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.855 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:34.855 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.116 [2024-11-26 19:10:52.084260] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:20:35.116 [2024-11-26 19:10:52.084318] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:35.116 [2024-11-26 19:10:52.174135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.116 [2024-11-26 19:10:52.202633] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:35.116 [2024-11-26 19:10:52.202661] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:35.116 [2024-11-26 19:10:52.202666] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:35.116 [2024-11-26 19:10:52.202671] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:35.116 [2024-11-26 19:10:52.202675] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:35.116 [2024-11-26 19:10:52.203136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.376 [2024-11-26 19:10:52.397223] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:35.376 [2024-11-26 19:10:52.429256] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:35.376 [2024-11-26 19:10:52.429453] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:35.947 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:35.947 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:35.947 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:35.947 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:35.947 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.947 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.947 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2966199 00:20:35.947 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2966199 /var/tmp/bdevperf.sock 00:20:35.947 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2966199 ']' 00:20:35.947 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:35.947 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:35.947 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:35.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
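The bdevperf configuration echoed below bakes the TLS client setup into the app's startup config. A rough RPC-level equivalent, assuming the documented flags of keyring_file_add_key and bdev_nvme_attach_controller (--psk selects the keyring entry by name, not a file path):

  # register the PSK file under the name the attach call will reference
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.A5f6G1FWFx
  # attach the TLS-secured controller that bdevperf will exercise
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0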
00:20:35.947 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:35.947 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:35.947 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.947 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:35.947 "subsystems": [ 00:20:35.947 { 00:20:35.947 "subsystem": "keyring", 00:20:35.947 "config": [ 00:20:35.947 { 00:20:35.947 "method": "keyring_file_add_key", 00:20:35.947 "params": { 00:20:35.947 "name": "key0", 00:20:35.947 "path": "/tmp/tmp.A5f6G1FWFx" 00:20:35.947 } 00:20:35.947 } 00:20:35.947 ] 00:20:35.947 }, 00:20:35.947 { 00:20:35.947 "subsystem": "iobuf", 00:20:35.947 "config": [ 00:20:35.947 { 00:20:35.947 "method": "iobuf_set_options", 00:20:35.947 "params": { 00:20:35.947 "small_pool_count": 8192, 00:20:35.947 "large_pool_count": 1024, 00:20:35.947 "small_bufsize": 8192, 00:20:35.947 "large_bufsize": 135168, 00:20:35.947 "enable_numa": false 00:20:35.947 } 00:20:35.947 } 00:20:35.947 ] 00:20:35.947 }, 00:20:35.947 { 00:20:35.947 "subsystem": "sock", 00:20:35.947 "config": [ 00:20:35.947 { 00:20:35.947 "method": "sock_set_default_impl", 00:20:35.947 "params": { 00:20:35.947 "impl_name": "posix" 00:20:35.947 } 00:20:35.947 }, 00:20:35.947 { 00:20:35.947 "method": "sock_impl_set_options", 00:20:35.947 "params": { 00:20:35.947 "impl_name": "ssl", 00:20:35.947 "recv_buf_size": 4096, 00:20:35.947 "send_buf_size": 4096, 00:20:35.947 "enable_recv_pipe": true, 00:20:35.947 "enable_quickack": false, 00:20:35.947 "enable_placement_id": 0, 00:20:35.947 "enable_zerocopy_send_server": true, 00:20:35.947 "enable_zerocopy_send_client": false, 00:20:35.947 "zerocopy_threshold": 0, 00:20:35.947 "tls_version": 0, 00:20:35.947 "enable_ktls": false 00:20:35.947 } 00:20:35.947 }, 00:20:35.947 { 00:20:35.947 "method": "sock_impl_set_options", 00:20:35.947 "params": { 00:20:35.947 "impl_name": "posix", 00:20:35.947 "recv_buf_size": 2097152, 00:20:35.947 "send_buf_size": 2097152, 00:20:35.947 "enable_recv_pipe": true, 00:20:35.947 "enable_quickack": false, 00:20:35.947 "enable_placement_id": 0, 00:20:35.947 "enable_zerocopy_send_server": true, 00:20:35.947 "enable_zerocopy_send_client": false, 00:20:35.947 "zerocopy_threshold": 0, 00:20:35.947 "tls_version": 0, 00:20:35.947 "enable_ktls": false 00:20:35.947 } 00:20:35.947 } 00:20:35.947 ] 00:20:35.947 }, 00:20:35.947 { 00:20:35.947 "subsystem": "vmd", 00:20:35.947 "config": [] 00:20:35.947 }, 00:20:35.947 { 00:20:35.947 "subsystem": "accel", 00:20:35.947 "config": [ 00:20:35.947 { 00:20:35.947 "method": "accel_set_options", 00:20:35.947 "params": { 00:20:35.947 "small_cache_size": 128, 00:20:35.947 "large_cache_size": 16, 00:20:35.947 "task_count": 2048, 00:20:35.947 "sequence_count": 2048, 00:20:35.947 "buf_count": 2048 00:20:35.947 } 00:20:35.947 } 00:20:35.947 ] 00:20:35.947 }, 00:20:35.947 { 00:20:35.947 "subsystem": "bdev", 00:20:35.947 "config": [ 00:20:35.947 { 00:20:35.947 "method": "bdev_set_options", 00:20:35.947 "params": { 00:20:35.947 "bdev_io_pool_size": 65535, 00:20:35.947 "bdev_io_cache_size": 256, 00:20:35.947 "bdev_auto_examine": true, 00:20:35.947 "iobuf_small_cache_size": 128, 00:20:35.947 "iobuf_large_cache_size": 16 00:20:35.947 } 00:20:35.947 }, 00:20:35.947 { 00:20:35.947 "method": 
"bdev_raid_set_options", 00:20:35.947 "params": { 00:20:35.947 "process_window_size_kb": 1024, 00:20:35.947 "process_max_bandwidth_mb_sec": 0 00:20:35.947 } 00:20:35.947 }, 00:20:35.947 { 00:20:35.947 "method": "bdev_iscsi_set_options", 00:20:35.947 "params": { 00:20:35.947 "timeout_sec": 30 00:20:35.947 } 00:20:35.947 }, 00:20:35.947 { 00:20:35.947 "method": "bdev_nvme_set_options", 00:20:35.947 "params": { 00:20:35.948 "action_on_timeout": "none", 00:20:35.948 "timeout_us": 0, 00:20:35.948 "timeout_admin_us": 0, 00:20:35.948 "keep_alive_timeout_ms": 10000, 00:20:35.948 "arbitration_burst": 0, 00:20:35.948 "low_priority_weight": 0, 00:20:35.948 "medium_priority_weight": 0, 00:20:35.948 "high_priority_weight": 0, 00:20:35.948 "nvme_adminq_poll_period_us": 10000, 00:20:35.948 "nvme_ioq_poll_period_us": 0, 00:20:35.948 "io_queue_requests": 512, 00:20:35.948 "delay_cmd_submit": true, 00:20:35.948 "transport_retry_count": 4, 00:20:35.948 "bdev_retry_count": 3, 00:20:35.948 "transport_ack_timeout": 0, 00:20:35.948 "ctrlr_loss_timeout_sec": 0, 00:20:35.948 "reconnect_delay_sec": 0, 00:20:35.948 "fast_io_fail_timeout_sec": 0, 00:20:35.948 "disable_auto_failback": false, 00:20:35.948 "generate_uuids": false, 00:20:35.948 "transport_tos": 0, 00:20:35.948 "nvme_error_stat": false, 00:20:35.948 "rdma_srq_size": 0, 00:20:35.948 "io_path_stat": false, 00:20:35.948 "allow_accel_sequence": false, 00:20:35.948 "rdma_max_cq_size": 0, 00:20:35.948 "rdma_cm_event_timeout_ms": 0, 00:20:35.948 "dhchap_digests": [ 00:20:35.948 "sha256", 00:20:35.948 "sha384", 00:20:35.948 "sha512" 00:20:35.948 ], 00:20:35.948 "dhchap_dhgroups": [ 00:20:35.948 "null", 00:20:35.948 "ffdhe2048", 00:20:35.948 "ffdhe3072", 00:20:35.948 "ffdhe4096", 00:20:35.948 "ffdhe6144", 00:20:35.948 "ffdhe8192" 00:20:35.948 ] 00:20:35.948 } 00:20:35.948 }, 00:20:35.948 { 00:20:35.948 "method": "bdev_nvme_attach_controller", 00:20:35.948 "params": { 00:20:35.948 "name": "nvme0", 00:20:35.948 "trtype": "TCP", 00:20:35.948 "adrfam": "IPv4", 00:20:35.948 "traddr": "10.0.0.2", 00:20:35.948 "trsvcid": "4420", 00:20:35.948 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:35.948 "prchk_reftag": false, 00:20:35.948 "prchk_guard": false, 00:20:35.948 "ctrlr_loss_timeout_sec": 0, 00:20:35.948 "reconnect_delay_sec": 0, 00:20:35.948 "fast_io_fail_timeout_sec": 0, 00:20:35.948 "psk": "key0", 00:20:35.948 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:35.948 "hdgst": false, 00:20:35.948 "ddgst": false, 00:20:35.948 "multipath": "multipath" 00:20:35.948 } 00:20:35.948 }, 00:20:35.948 { 00:20:35.948 "method": "bdev_nvme_set_hotplug", 00:20:35.948 "params": { 00:20:35.948 "period_us": 100000, 00:20:35.948 "enable": false 00:20:35.948 } 00:20:35.948 }, 00:20:35.948 { 00:20:35.948 "method": "bdev_enable_histogram", 00:20:35.948 "params": { 00:20:35.948 "name": "nvme0n1", 00:20:35.948 "enable": true 00:20:35.948 } 00:20:35.948 }, 00:20:35.948 { 00:20:35.948 "method": "bdev_wait_for_examine" 00:20:35.948 } 00:20:35.948 ] 00:20:35.948 }, 00:20:35.948 { 00:20:35.948 "subsystem": "nbd", 00:20:35.948 "config": [] 00:20:35.948 } 00:20:35.948 ] 00:20:35.948 }' 00:20:35.948 [2024-11-26 19:10:52.959256] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
00:20:35.948 [2024-11-26 19:10:52.959311] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2966199 ] 00:20:35.948 [2024-11-26 19:10:53.042754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.948 [2024-11-26 19:10:53.072836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.209 [2024-11-26 19:10:53.208883] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:36.781 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:36.781 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:36.781 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:36.781 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:36.781 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.781 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:37.042 Running I/O for 1 seconds... 00:20:37.983 5415.00 IOPS, 21.15 MiB/s 00:20:37.983 Latency(us) 00:20:37.983 [2024-11-26T18:10:55.196Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.983 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:37.983 Verification LBA range: start 0x0 length 0x2000 00:20:37.983 nvme0n1 : 1.01 5468.69 21.36 0.00 0.00 23251.89 5434.03 48496.64 00:20:37.983 [2024-11-26T18:10:55.196Z] =================================================================================================================== 00:20:37.983 [2024-11-26T18:10:55.196Z] Total : 5468.69 21.36 0.00 0.00 23251.89 5434.03 48496.64 00:20:37.983 { 00:20:37.983 "results": [ 00:20:37.983 { 00:20:37.983 "job": "nvme0n1", 00:20:37.983 "core_mask": "0x2", 00:20:37.983 "workload": "verify", 00:20:37.983 "status": "finished", 00:20:37.983 "verify_range": { 00:20:37.983 "start": 0, 00:20:37.983 "length": 8192 00:20:37.983 }, 00:20:37.983 "queue_depth": 128, 00:20:37.983 "io_size": 4096, 00:20:37.983 "runtime": 1.013589, 00:20:37.983 "iops": 5468.686025598146, 00:20:37.983 "mibps": 21.36205478749276, 00:20:37.983 "io_failed": 0, 00:20:37.983 "io_timeout": 0, 00:20:37.983 "avg_latency_us": 23251.885299176138, 00:20:37.983 "min_latency_us": 5434.026666666667, 00:20:37.983 "max_latency_us": 48496.64 00:20:37.983 } 00:20:37.983 ], 00:20:37.983 "core_count": 1 00:20:37.983 } 00:20:37.983 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:37.983 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:37.983 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:37.983 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:20:37.983 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:20:37.983 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid 
']' 00:20:37.983 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:37.983 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:37.983 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:37.983 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:37.983 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:37.983 nvmf_trace.0 00:20:37.983 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:20:37.983 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2966199 00:20:37.983 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2966199 ']' 00:20:37.983 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2966199 00:20:37.983 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:37.983 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:37.983 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2966199 00:20:38.244 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:38.244 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:38.244 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2966199' 00:20:38.244 killing process with pid 2966199 00:20:38.244 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2966199 00:20:38.244 Received shutdown signal, test time was about 1.000000 seconds 00:20:38.244 00:20:38.244 Latency(us) 00:20:38.244 [2024-11-26T18:10:55.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.244 [2024-11-26T18:10:55.457Z] =================================================================================================================== 00:20:38.244 [2024-11-26T18:10:55.457Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:38.244 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2966199 00:20:38.244 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:38.244 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:38.244 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:38.244 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:38.244 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:38.244 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:38.244 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:38.244 rmmod nvme_tcp 00:20:38.244 rmmod nvme_fabrics 00:20:38.244 rmmod nvme_keyring 00:20:38.244 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:38.244 19:10:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:38.244 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:38.244 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2966171 ']' 00:20:38.244 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2966171 00:20:38.245 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2966171 ']' 00:20:38.245 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2966171 00:20:38.245 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:38.245 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:38.245 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2966171 00:20:38.505 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:38.505 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:38.505 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2966171' 00:20:38.505 killing process with pid 2966171 00:20:38.505 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2966171 00:20:38.505 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2966171 00:20:38.505 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:38.505 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:38.505 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:38.505 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:38.505 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:20:38.505 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:38.505 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:20:38.505 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:38.505 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:38.505 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.505 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:38.505 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.oAkoTCNcKN /tmp/tmp.L5PqOGh5CM /tmp/tmp.A5f6G1FWFx 00:20:41.049 00:20:41.049 real 1m27.360s 00:20:41.049 user 2m18.499s 00:20:41.049 sys 0m26.773s 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.049 ************************************ 00:20:41.049 END TEST nvmf_tls 
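The three /tmp/tmp.* files removed in the cleanup above held the TLS pre-shared keys exercised by this test. They are expected to contain single-line PSKs in the NVMe/TCP interchange format, roughly of the form (placeholder shown, not key material from this run):

  NVMeTLSkey-1:01:<base64-encoded configured PSK data>: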
00:20:41.049 ************************************ 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:41.049 ************************************ 00:20:41.049 START TEST nvmf_fips 00:20:41.049 ************************************ 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:41.049 * Looking for test storage... 00:20:41.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:41.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.049 --rc genhtml_branch_coverage=1 00:20:41.049 --rc genhtml_function_coverage=1 00:20:41.049 --rc genhtml_legend=1 00:20:41.049 --rc geninfo_all_blocks=1 00:20:41.049 --rc geninfo_unexecuted_blocks=1 00:20:41.049 00:20:41.049 ' 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:41.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.049 --rc genhtml_branch_coverage=1 00:20:41.049 --rc genhtml_function_coverage=1 00:20:41.049 --rc genhtml_legend=1 00:20:41.049 --rc geninfo_all_blocks=1 00:20:41.049 --rc geninfo_unexecuted_blocks=1 00:20:41.049 00:20:41.049 ' 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:41.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.049 --rc genhtml_branch_coverage=1 00:20:41.049 --rc genhtml_function_coverage=1 00:20:41.049 --rc genhtml_legend=1 00:20:41.049 --rc geninfo_all_blocks=1 00:20:41.049 --rc geninfo_unexecuted_blocks=1 00:20:41.049 00:20:41.049 ' 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:41.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.049 --rc genhtml_branch_coverage=1 00:20:41.049 --rc genhtml_function_coverage=1 00:20:41.049 --rc genhtml_legend=1 00:20:41.049 --rc geninfo_all_blocks=1 00:20:41.049 --rc geninfo_unexecuted_blocks=1 00:20:41.049 00:20:41.049 ' 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:41.049 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:41.050 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:41.050 19:10:57 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:41.050 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:20:41.050 Error setting digest 00:20:41.050 40727348687F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:41.050 40727348687F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:41.050 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:41.051 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:41.051 
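The stretch above is the suite's FIPS gate: fips.sh requires OpenSSL >= 3.0.0 (the cmp_versions walk), confirms /usr/lib64/ossl-modules/fips.so is installed, checks that both a base and a fips provider are listed, and finally proves enforcement by expecting a legacy digest to fail — hence the deliberate "Error setting digest" output. A standalone probe of that last property (a sketch, assuming an OpenSSL 3.x host with the FIPS provider enforcing, as on this RHEL 9 rig):

    # MD5 is not a FIPS-approved digest, so a compliant build must refuse it
    if echo -n probe | openssl md5 >/dev/null 2>&1; then
        echo 'MD5 accepted - FIPS is NOT being enforced'
    else
        echo 'MD5 rejected - consistent with FIPS mode'   # the path this log takes
    fi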
19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:41.051 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:41.051 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:41.051 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:41.051 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.051 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:41.051 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.051 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:41.051 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:41.051 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:41.051 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:49.324 19:11:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:49.324 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:49.324 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:49.324 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:49.325 19:11:05 
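gather_supported_nvmf_pci_devs, above, whitelists known Intel (e810/x722) and Mellanox device IDs and matched both E810 ports (0x8086:0x159b, ice driver) at 0000:4b:00.0/.1; the lines that follow resolve each port to its kernel netdev through sysfs. Condensed, that resolution is just a glob (a sketch using this rig's addresses):

    # each PCI network function exposes its netdev name under .../net/
    ls /sys/bus/pci/devices/0000:4b:00.0/net/    # -> cvl_0_0
    ls /sys/bus/pci/devices/0000:4b:00.1/net/    # -> cvl_0_1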
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:49.325 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:49.325 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:49.325 19:11:05 
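With cvl_0_0 and cvl_0_1 discovered, nvmf_tcp_init (traced next) splits the two ports of the one physical NIC into an initiator side left in the host namespace and a target side moved into a private namespace, letting a single machine drive real E810 hardware end to end. The wiring, condensed from the trace:

    ip netns add cvl_0_0_ns_spdk                  # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port in
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # host -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> host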
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:49.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:49.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:20:49.325 00:20:49.325 --- 10.0.0.2 ping statistics --- 00:20:49.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.325 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:49.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:49.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:20:49.325 00:20:49.325 --- 10.0.0.1 ping statistics --- 00:20:49.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.325 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2970987 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2970987 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2970987 ']' 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:49.325 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:49.325 [2024-11-26 19:11:05.781100] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
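Both pings pass, and nvmfappstart brings up the target inside the namespace, pinned to core 1 by -m 0x2 so that bdevperf can later take core 2 with -m 0x4. Stripped of the helpers, the launch traced above amounts to:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # waitforlisten then blocks until the app answers RPCs on /var/tmp/spdk.sock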
00:20:49.325 [2024-11-26 19:11:05.781183] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.325 [2024-11-26 19:11:05.885170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.325 [2024-11-26 19:11:05.936064] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:49.325 [2024-11-26 19:11:05.936114] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:49.325 [2024-11-26 19:11:05.936122] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:49.325 [2024-11-26 19:11:05.936130] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:49.325 [2024-11-26 19:11:05.936137] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:49.325 [2024-11-26 19:11:05.936895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:49.586 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:49.586 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:49.586 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:49.586 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:49.586 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:49.586 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:49.586 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:49.586 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:49.586 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:49.586 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.nIE 00:20:49.586 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:49.586 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.nIE 00:20:49.586 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.nIE 00:20:49.586 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.nIE 00:20:49.586 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:49.847 [2024-11-26 19:11:06.816665] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:49.847 [2024-11-26 19:11:06.832675] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:49.847 [2024-11-26 19:11:06.833014] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:49.847 malloc0 00:20:49.847 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:49.847 19:11:06 
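Key provisioning, above, is deliberately simple: a fixed NVMe-oF TLS PSK is written to a mode-0600 temp file, the target is configured from it, and the TCP listener comes up on 10.0.0.2:4420 with the "TLS support is considered experimental" notice. Condensed from the trace:

    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    key_path=$(mktemp -t spdk-psk.XXX)    # /tmp/spdk-psk.nIE on this run
    echo -n "$key" > "$key_path"
    chmod 0600 "$key_path"                # keep the shared secret private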
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2971266 00:20:49.847 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2971266 /var/tmp/bdevperf.sock 00:20:49.847 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:49.847 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2971266 ']' 00:20:49.847 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:49.847 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:49.847 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:49.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:49.847 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:49.847 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:49.847 [2024-11-26 19:11:06.974705] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:20:49.847 [2024-11-26 19:11:06.974779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2971266 ] 00:20:50.108 [2024-11-26 19:11:07.067727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.108 [2024-11-26 19:11:07.119073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:50.679 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:50.679 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:50.679 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.nIE 00:20:50.939 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:51.200 [2024-11-26 19:11:08.154489] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:51.200 TLSTESTn1 00:20:51.200 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:51.200 Running I/O for 10 seconds... 
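The TLS hookup itself is two RPCs against bdevperf's private socket: register the PSK file as named key "key0", then attach the controller with --psk so the TCP connection is wrapped in TLS (producing the TLSTESTn1 bdev above). Condensed, with rpc.py standing for the full scripts/rpc.py path used in the trace:

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.nIE
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0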
00:20:53.154 4764.00 IOPS, 18.61 MiB/s [2024-11-26T18:11:11.751Z] 5102.00 IOPS, 19.93 MiB/s [2024-11-26T18:11:12.688Z] 5434.00 IOPS, 21.23 MiB/s [2024-11-26T18:11:13.626Z] 5715.50 IOPS, 22.33 MiB/s [2024-11-26T18:11:14.563Z] 5775.20 IOPS, 22.56 MiB/s [2024-11-26T18:11:15.503Z] 5864.83 IOPS, 22.91 MiB/s [2024-11-26T18:11:16.444Z] 5929.29 IOPS, 23.16 MiB/s [2024-11-26T18:11:17.383Z] 5938.12 IOPS, 23.20 MiB/s [2024-11-26T18:11:18.764Z] 6001.11 IOPS, 23.44 MiB/s [2024-11-26T18:11:18.764Z] 6026.70 IOPS, 23.54 MiB/s 00:21:01.551 Latency(us) 00:21:01.551 [2024-11-26T18:11:18.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.551 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:01.551 Verification LBA range: start 0x0 length 0x2000 00:21:01.551 TLSTESTn1 : 10.03 6023.12 23.53 0.00 0.00 21208.19 6007.47 30801.92 00:21:01.551 [2024-11-26T18:11:18.764Z] =================================================================================================================== 00:21:01.551 [2024-11-26T18:11:18.764Z] Total : 6023.12 23.53 0.00 0.00 21208.19 6007.47 30801.92 00:21:01.551 { 00:21:01.551 "results": [ 00:21:01.551 { 00:21:01.551 "job": "TLSTESTn1", 00:21:01.551 "core_mask": "0x4", 00:21:01.551 "workload": "verify", 00:21:01.551 "status": "finished", 00:21:01.551 "verify_range": { 00:21:01.551 "start": 0, 00:21:01.551 "length": 8192 00:21:01.551 }, 00:21:01.551 "queue_depth": 128, 00:21:01.552 "io_size": 4096, 00:21:01.552 "runtime": 10.026869, 00:21:01.552 "iops": 6023.116488307566, 00:21:01.552 "mibps": 23.52779878245143, 00:21:01.552 "io_failed": 0, 00:21:01.552 "io_timeout": 0, 00:21:01.552 "avg_latency_us": 21208.190211889898, 00:21:01.552 "min_latency_us": 6007.466666666666, 00:21:01.552 "max_latency_us": 30801.92 00:21:01.552 } 00:21:01.552 ], 00:21:01.552 "core_count": 1 00:21:01.552 } 00:21:01.552 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:01.552 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:01.552 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:21:01.552 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:21:01.552 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:21:01.552 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:01.552 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:01.552 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:01.552 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:01.552 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:01.552 nvmf_trace.0 00:21:01.552 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:21:01.552 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2971266 00:21:01.552 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2971266 ']' 00:21:01.552 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
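The summary numbers are internally consistent: at the 4 KiB I/O size, the reported IOPS and bandwidth agree to rounding —

    echo '6023.116488307566 * 4096 / 1048576' | bc -l    # 23.527..., the "mibps" field

i.e. roughly 23.5 MiB/s at queue depth 128 on the single bdevperf core, all of it through the TLS-wrapped connection set up above.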
common/autotest_common.sh@958 -- # kill -0 2971266 00:21:01.552 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:01.552 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:01.552 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2971266 00:21:01.552 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:01.552 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:01.552 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2971266' 00:21:01.552 killing process with pid 2971266 00:21:01.552 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2971266 00:21:01.552 Received shutdown signal, test time was about 10.000000 seconds 00:21:01.552 00:21:01.552 Latency(us) 00:21:01.552 [2024-11-26T18:11:18.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.552 [2024-11-26T18:11:18.765Z] =================================================================================================================== 00:21:01.552 [2024-11-26T18:11:18.765Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:01.552 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2971266 00:21:01.552 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:01.552 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:01.552 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:21:01.552 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:01.552 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:21:01.552 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:01.552 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:01.552 rmmod nvme_tcp 00:21:01.552 rmmod nvme_fabrics 00:21:01.552 rmmod nvme_keyring 00:21:01.812 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:01.812 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:21:01.812 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:21:01.812 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2970987 ']' 00:21:01.812 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2970987 00:21:01.812 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2970987 ']' 00:21:01.812 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2970987 00:21:01.813 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:01.813 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:01.813 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2970987 00:21:01.813 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:01.813 19:11:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:01.813 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2970987' 00:21:01.813 killing process with pid 2970987 00:21:01.813 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2970987 00:21:01.813 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2970987 00:21:01.813 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:01.813 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:01.813 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:01.813 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:21:01.813 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:21:01.813 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:01.813 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:21:01.813 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:01.813 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:01.813 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.813 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:01.813 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.360 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:04.360 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.nIE 00:21:04.360 00:21:04.360 real 0m23.301s 00:21:04.360 user 0m25.184s 00:21:04.360 sys 0m9.553s 00:21:04.360 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:04.360 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:04.360 ************************************ 00:21:04.360 END TEST nvmf_fips 00:21:04.360 ************************************ 00:21:04.360 19:11:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:04.360 19:11:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:04.360 19:11:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:04.360 19:11:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:04.360 ************************************ 00:21:04.360 START TEST nvmf_control_msg_list 00:21:04.360 ************************************ 00:21:04.360 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:04.360 * Looking for test storage... 
00:21:04.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:04.360 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:04.360 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:21:04.360 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:04.360 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:04.360 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:04.360 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:04.360 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:04.360 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:21:04.360 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:21:04.360 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:04.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.361 --rc genhtml_branch_coverage=1 00:21:04.361 --rc genhtml_function_coverage=1 00:21:04.361 --rc genhtml_legend=1 00:21:04.361 --rc geninfo_all_blocks=1 00:21:04.361 --rc geninfo_unexecuted_blocks=1 00:21:04.361 00:21:04.361 ' 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:04.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.361 --rc genhtml_branch_coverage=1 00:21:04.361 --rc genhtml_function_coverage=1 00:21:04.361 --rc genhtml_legend=1 00:21:04.361 --rc geninfo_all_blocks=1 00:21:04.361 --rc geninfo_unexecuted_blocks=1 00:21:04.361 00:21:04.361 ' 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:04.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.361 --rc genhtml_branch_coverage=1 00:21:04.361 --rc genhtml_function_coverage=1 00:21:04.361 --rc genhtml_legend=1 00:21:04.361 --rc geninfo_all_blocks=1 00:21:04.361 --rc geninfo_unexecuted_blocks=1 00:21:04.361 00:21:04.361 ' 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:04.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.361 --rc genhtml_branch_coverage=1 00:21:04.361 --rc genhtml_function_coverage=1 00:21:04.361 --rc genhtml_legend=1 00:21:04.361 --rc geninfo_all_blocks=1 00:21:04.361 --rc geninfo_unexecuted_blocks=1 00:21:04.361 00:21:04.361 ' 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:04.361 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:04.361 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:04.362 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:21:04.362 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:21:12.503 19:11:28 
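One cosmetic error is worth flagging: the "[: : integer expression expected" line above comes from common.sh line 33 evaluating '[' '' -eq 1 ']' — the flag it tests is unset in this run, and test(1) cannot compare an empty string numerically, so the check falls through harmlessly. A defensive form would default the empty value first (a sketch; FLAG and its action are placeholders, since the trace elides the actual variable name):

    [ "${FLAG:-0}" -eq 1 ] && do_optional_setup    # empty/unset counts as 0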
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:12.503 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.503 19:11:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:12.503 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:12.503 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:12.503 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:12.504 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:12.504 19:11:28 
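At this point the script has assembled the standard two-port loopback topology for the TCP tests: one E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. Condensed from the commands traced above (interface and namespace names are the ones from this run):

    # Target port lives in its own netns; the initiator port stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

Keeping the target in a separate namespace forces traffic between the two ports of the same host out over the real NICs instead of being short-circuited through the kernel's local routing.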
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:12.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:12.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.529 ms 00:21:12.504 00:21:12.504 --- 10.0.0.2 ping statistics --- 00:21:12.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.504 rtt min/avg/max/mdev = 0.529/0.529/0.529/0.000 ms 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:12.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:12.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:21:12.504 00:21:12.504 --- 10.0.0.1 ping statistics --- 00:21:12.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.504 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2977706 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2977706 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2977706 ']' 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:12.504 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:12.504 [2024-11-26 19:11:28.933612] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:21:12.504 [2024-11-26 19:11:28.933679] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.504 [2024-11-26 19:11:29.035187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.504 [2024-11-26 19:11:29.086106] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:12.504 [2024-11-26 19:11:29.086167] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:12.504 [2024-11-26 19:11:29.086175] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:12.504 [2024-11-26 19:11:29.086183] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:12.504 [2024-11-26 19:11:29.086190] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
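nvmfappstart launches the target application inside that namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF) and waitforlisten blocks until it answers on /var/tmp/spdk.sock. A minimal equivalent of the start-and-wait step, assuming an SPDK checkout as the working directory (rpc_get_methods is used here only as a cheap liveness probe):

    # Start nvmf_tgt in the target namespace and wait for its RPC socket.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"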
00:21:12.504 [2024-11-26 19:11:29.086939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.765 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:12.765 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:21:12.765 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:12.765 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:12.765 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:12.765 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:12.765 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:12.765 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:12.765 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:21:12.765 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.765 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:12.765 [2024-11-26 19:11:29.802578] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:12.765 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.765 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:21:12.765 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.765 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:12.765 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.765 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:12.765 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.765 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:12.765 Malloc0 00:21:12.765 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.765 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:12.765 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.765 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:12.765 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.765 19:11:29 
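The rpc_cmd calls traced here provision the target for the control-message test: a TCP transport whose control message list is capped at a single entry (--control-msg-num 1) and whose in-capsule data size is only 768 bytes, plus a subsystem backed by a 32 MiB malloc bdev and, on the next step, a listener on 10.0.0.2:4420. The same sequence expressed as plain rpc.py calls (a sketch mirroring the traced arguments; the test drives them through rpc_cmd instead):

    rpc='./scripts/rpc.py -s /var/tmp/spdk.sock'
    $rpc nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    $rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
    $rpc bdev_malloc_create -b Malloc0 32 512    # 32 MiB bdev, 512-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420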
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:12.765 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.765 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:12.766 [2024-11-26 19:11:29.856988] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:12.766 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.766 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2977968 00:21:12.766 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:12.766 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2977969 00:21:12.766 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:12.766 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2977970 00:21:12.766 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2977968 00:21:12.766 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:12.766 [2024-11-26 19:11:29.947509] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:12.766 [2024-11-26 19:11:29.957304] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:12.766 [2024-11-26 19:11:29.967378] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:14.153 Initializing NVMe Controllers 00:21:14.153 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:14.153 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:21:14.153 Initialization complete. Launching workers. 
00:21:14.153 ========================================================
00:21:14.153 Latency(us)
00:21:14.153 Device Information : IOPS MiB/s Average min max
00:21:14.153 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 27.00 0.11 37198.84 335.31 41940.81
00:21:14.153 ========================================================
00:21:14.153 Total : 27.00 0.11 37198.84 335.31 41940.81
00:21:14.153
00:21:14.153 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2977969
00:21:14.153 Initializing NVMe Controllers
00:21:14.153 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:21:14.153 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2
00:21:14.153 Initialization complete. Launching workers.
00:21:14.153 ========================================================
00:21:14.153 Latency(us)
00:21:14.153 Device Information : IOPS MiB/s Average min max
00:21:14.153 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40927.31 40708.95 41459.54
00:21:14.153 ========================================================
00:21:14.153 Total : 25.00 0.10 40927.31 40708.95 41459.54
00:21:14.153
00:21:14.153 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2977970
00:21:14.153 Initializing NVMe Controllers
00:21:14.153 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:21:14.153 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3
00:21:14.153 Initialization complete. Launching workers.
00:21:14.153 ========================================================
00:21:14.153 Latency(us)
00:21:14.153 Device Information : IOPS MiB/s Average min max
00:21:14.153 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 1505.00 5.88 664.16 304.54 815.73
00:21:14.153 ========================================================
00:21:14.153 Total : 1505.00 5.88 664.16 304.54 815.73
00:21:14.153
00:21:14.153 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:21:14.153 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini
00:21:14.153 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:14.153 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync
00:21:14.153 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:14.153 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e
00:21:14.153 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:14.153 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:14.153 rmmod nvme_tcp
00:21:14.153 rmmod nvme_fabrics
00:21:14.153 rmmod nvme_keyring
00:21:14.153 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:14.153 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e
00:21:14.153 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0
00:21:14.153 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- #
'[' -n 2977706 ']' 00:21:14.153 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2977706 00:21:14.153 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2977706 ']' 00:21:14.153 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2977706 00:21:14.153 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:21:14.153 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:14.153 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2977706 00:21:14.153 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:14.153 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:14.153 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2977706' 00:21:14.153 killing process with pid 2977706 00:21:14.153 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2977706 00:21:14.153 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2977706 00:21:14.414 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:14.414 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:14.414 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:14.414 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:21:14.414 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:21:14.414 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:14.414 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:21:14.414 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:14.414 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:14.414 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.414 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:14.414 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.962 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:16.962 00:21:16.962 real 0m12.449s 00:21:16.962 user 0m8.067s 00:21:16.962 sys 0m6.559s 00:21:16.962 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:16.962 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:16.962 ************************************ 00:21:16.962 END TEST nvmf_control_msg_list 00:21:16.962 ************************************ 
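The load phase that produced the three tables above pins three spdk_nvme_perf instances to separate cores (-c 0x2, 0x4, 0x8), each issuing queue-depth-1 4 KiB random reads for one second, and then waits on each pid. With only one control message to go around, two of the runs visibly stall (average latency in the 37-41 ms range) while the third proceeds normally (about 0.66 ms); the test passes because all three nevertheless complete. The pattern, condensed:

    # Three concurrent initiators contending for the single control message.
    perf=./build/bin/spdk_nvme_perf
    tr='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    "$perf" -c 0x2 -q 1 -o 4096 -w randread -t 1 -r "$tr" & pid1=$!
    "$perf" -c 0x4 -q 1 -o 4096 -w randread -t 1 -r "$tr" & pid2=$!
    "$perf" -c 0x8 -q 1 -o 4096 -w randread -t 1 -r "$tr" & pid3=$!
    wait "$pid1" "$pid2" "$pid3"   # all must exit cleanly despite the contention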
00:21:16.962 19:11:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:16.962 19:11:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:16.962 19:11:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:16.962 19:11:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:16.962 ************************************ 00:21:16.962 START TEST nvmf_wait_for_buf 00:21:16.962 ************************************ 00:21:16.962 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:16.962 * Looking for test storage... 00:21:16.962 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:16.962 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:16.962 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:21:16.962 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:16.962 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:16.962 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:16.962 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:16.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.963 --rc genhtml_branch_coverage=1 00:21:16.963 --rc genhtml_function_coverage=1 00:21:16.963 --rc genhtml_legend=1 00:21:16.963 --rc geninfo_all_blocks=1 00:21:16.963 --rc geninfo_unexecuted_blocks=1 00:21:16.963 00:21:16.963 ' 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:16.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.963 --rc genhtml_branch_coverage=1 00:21:16.963 --rc genhtml_function_coverage=1 00:21:16.963 --rc genhtml_legend=1 00:21:16.963 --rc geninfo_all_blocks=1 00:21:16.963 --rc geninfo_unexecuted_blocks=1 00:21:16.963 00:21:16.963 ' 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:16.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.963 --rc genhtml_branch_coverage=1 00:21:16.963 --rc genhtml_function_coverage=1 00:21:16.963 --rc genhtml_legend=1 00:21:16.963 --rc geninfo_all_blocks=1 00:21:16.963 --rc geninfo_unexecuted_blocks=1 00:21:16.963 00:21:16.963 ' 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:16.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.963 --rc genhtml_branch_coverage=1 00:21:16.963 --rc genhtml_function_coverage=1 00:21:16.963 --rc genhtml_legend=1 00:21:16.963 --rc geninfo_all_blocks=1 00:21:16.963 --rc geninfo_unexecuted_blocks=1 00:21:16.963 00:21:16.963 ' 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:16.963 19:11:33 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:16.963 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.964 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:16.964 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:16.964 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:16.964 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:16.964 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:16.964 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.964 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:16.964 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:16.964 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:16.964 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:16.964 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:16.964 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:16.964 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:21:16.964 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:16.964 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:16.964 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:16.964 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:16.964 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.964 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:16.964 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.964 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:16.964 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:16.964 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:16.964 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:25.103 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:25.103 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:25.103 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:25.103 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:25.103 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:25.103 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:25.103 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:25.103 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:25.103 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:25.103 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:25.103 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:25.103 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:25.103 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:25.103 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:25.103 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:25.103 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:25.103 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:25.103 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:25.103 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:25.103 
19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:25.103 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:25.103 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:25.103 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:25.103 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:25.103 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:25.103 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:25.103 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:25.103 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:25.103 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:25.103 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:25.103 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:25.103 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:25.103 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:25.103 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:25.104 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:25.104 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:25.104 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:25.104 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:25.104 19:11:41 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:25.104 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:25.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:21:25.104 00:21:25.104 --- 10.0.0.2 ping statistics --- 00:21:25.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:25.104 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:25.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:25.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:21:25.104 00:21:25.104 --- 10.0.0.1 ping statistics --- 00:21:25.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:25.104 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2982349 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 2982349 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2982349 ']' 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:25.104 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:25.104 [2024-11-26 19:11:41.484911] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
00:21:25.104 [2024-11-26 19:11:41.484979] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:25.104 [2024-11-26 19:11:41.584943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.104 [2024-11-26 19:11:41.636256] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:25.104 [2024-11-26 19:11:41.636308] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:25.104 [2024-11-26 19:11:41.636316] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:25.104 [2024-11-26 19:11:41.636324] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:25.104 [2024-11-26 19:11:41.636336] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:25.104 [2024-11-26 19:11:41.637092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.105 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:25.105 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:21:25.105 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:25.105 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:25.105 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:25.368 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:25.368 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:25.368 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:25.368 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:25.368 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.368 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:25.368 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.368 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:25.368 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.368 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:25.368 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.368 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:25.368 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.368 19:11:42 
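Unlike the previous test, wait_for_buf starts the target with --wait-for-rpc so the shared buffer pools can be shrunk before framework_start_init, as traced above: the accel caches are zeroed and the iobuf small pool is capped at 154 buffers of 8 KiB (--small-pool-count 154 --small_bufsize=8192). The transport is then created with a deliberately small shared-buffer configuration (-u 8192 -n 24 -b 24), so the 128 KiB perf reads issued later (-o 131072) must repeatedly wait for buffers. The pass condition, checked further down, is a non-zero small_pool.retry counter; a sketch of that final check, reusing the trace's own jq filter:

    # Pass = the nvmf_TCP module had to retry small-iobuf allocations.
    retry_count=$(./scripts/rpc.py -s /var/tmp/spdk.sock iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    if [[ $retry_count -eq 0 ]]; then
        echo "target never had to wait for a buffer" >&2
        exit 1
    fi
    echo "small_pool retries: $retry_count"   # 374 in this run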
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:25.368 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.368 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:25.368 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.368 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:25.368 Malloc0 00:21:25.368 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.368 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:25.368 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.368 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:25.368 [2024-11-26 19:11:42.458635] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:25.368 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.368 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:25.368 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.368 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:25.368 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.368 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:25.368 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.368 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:25.368 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.368 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:25.368 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.368 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:25.368 [2024-11-26 19:11:42.494951] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:25.368 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.368 19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:25.628 [2024-11-26 19:11:42.598275] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:27.012 Initializing NVMe Controllers 00:21:27.012 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:27.012 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:21:27.012 Initialization complete. Launching workers. 00:21:27.012 ======================================================== 00:21:27.012 Latency(us) 00:21:27.012 Device Information : IOPS MiB/s Average min max 00:21:27.012 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 25.00 3.12 165995.86 47861.54 191557.14 00:21:27.012 ======================================================== 00:21:27.012 Total : 25.00 3.12 165995.86 47861.54 191557.14 00:21:27.012 00:21:27.012 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:27.012 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.012 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:27.012 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:27.012 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.012 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=374 00:21:27.012 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 374 -eq 0 ]] 00:21:27.013 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:27.013 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:27.013 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:27.013 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:27.013 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:27.013 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:27.013 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:27.013 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:27.013 rmmod nvme_tcp 00:21:27.013 rmmod nvme_fabrics 00:21:27.013 rmmod nvme_keyring 00:21:27.013 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:27.013 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:27.013 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:27.013 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2982349 ']' 00:21:27.013 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2982349 00:21:27.013 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2982349 ']' 00:21:27.013 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2982349 00:21:27.013 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:21:27.013 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:27.013 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2982349 00:21:27.013 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:27.013 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:27.013 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2982349' 00:21:27.013 killing process with pid 2982349 00:21:27.013 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2982349 00:21:27.013 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2982349 00:21:27.274 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:27.274 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:27.274 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:27.274 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:27.274 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:21:27.274 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:27.274 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:21:27.274 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:27.274 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:27.274 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.274 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:27.274 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:29.185 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:29.185 00:21:29.185 real 0m12.702s 00:21:29.185 user 0m5.091s 00:21:29.185 sys 0m6.206s 00:21:29.185 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:29.185 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:29.185 ************************************ 00:21:29.185 END TEST nvmf_wait_for_buf 00:21:29.185 ************************************ 00:21:29.446 19:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:29.446 19:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:29.446 19:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:29.446 19:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:29.446 19:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:29.446 19:11:46 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:37.589 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:37.589 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:37.589 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:37.589 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:37.589 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:37.589 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:37.589 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:37.589 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:37.589 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:37.589 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:37.589 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:37.589 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:37.589 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:37.589 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:37.589 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:37.589 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:37.589 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:37.589 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:37.589 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:37.589 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:37.589 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:37.589 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:37.589 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:37.589 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:37.589 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:37.589 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:37.589 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:37.589 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:37.589 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:37.589 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:37.590 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:37.590 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:37.590 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:37.590 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:37.590 ************************************ 00:21:37.590 START TEST nvmf_perf_adq 00:21:37.590 ************************************ 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:37.590 * Looking for test storage... 00:21:37.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:37.590 19:11:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:37.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.590 --rc genhtml_branch_coverage=1 00:21:37.590 --rc genhtml_function_coverage=1 00:21:37.590 --rc genhtml_legend=1 00:21:37.590 --rc geninfo_all_blocks=1 00:21:37.590 --rc geninfo_unexecuted_blocks=1 00:21:37.590 00:21:37.590 ' 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:37.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.590 --rc genhtml_branch_coverage=1 00:21:37.590 --rc genhtml_function_coverage=1 00:21:37.590 --rc genhtml_legend=1 00:21:37.590 --rc geninfo_all_blocks=1 00:21:37.590 --rc geninfo_unexecuted_blocks=1 00:21:37.590 00:21:37.590 ' 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:37.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.590 --rc genhtml_branch_coverage=1 00:21:37.590 --rc genhtml_function_coverage=1 00:21:37.590 --rc genhtml_legend=1 00:21:37.590 --rc geninfo_all_blocks=1 00:21:37.590 --rc geninfo_unexecuted_blocks=1 00:21:37.590 00:21:37.590 ' 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:37.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.590 --rc genhtml_branch_coverage=1 00:21:37.590 --rc genhtml_function_coverage=1 00:21:37.590 --rc genhtml_legend=1 00:21:37.590 --rc geninfo_all_blocks=1 00:21:37.590 --rc geninfo_unexecuted_blocks=1 00:21:37.590 00:21:37.590 ' 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
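gather_supported_nvmf_pci_devs, traced above, reduces to matching known NIC PCI IDs (0x8086:0x159b for the two E810 ports on this rig) and collecting the kernel netdevs bound to each matching function under sysfs. A rough standalone equivalent, assuming lspci is available; the harness itself walks a prebuilt pci_bus_cache rather than shelling out:

net_devs=()
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
  # Each PCI function exposes its bound netdevs under net/ in sysfs.
  for path in "/sys/bus/pci/devices/$pci/net/"*; do
    [[ -e $path ]] || continue
    # The harness additionally keeps only interfaces that are up, per the [[ up == up ]] checks above.
    net_devs+=("${path##*/}")
  done
done
echo "Found net devices: ${net_devs[*]}"   # expect cvl_0_0 and cvl_0_1 on this rig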
00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:37.590 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:37.590 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:37.590 19:11:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:44.180 19:12:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:44.180 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:44.180 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:44.180 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:44.180 19:12:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:44.180 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:44.180 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:45.566 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:48.111 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:53.402 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:53.402 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:53.402 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:53.402 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:53.402 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:53.402 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:53.402 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.402 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:53.402 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.402 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:53.402 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:53.402 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:53.402 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.402 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:53.402 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:53.402 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:53.402 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:53.402 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:53.402 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:53.402 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:53.403 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:53.403 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:53.403 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:53.403 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:53.403 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:53.403 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:53.403 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:53.403 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:53.403 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:53.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:53.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:21:53.403 00:21:53.403 --- 10.0.0.2 ping statistics --- 00:21:53.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.403 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:21:53.403 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:53.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:53.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:21:53.403 00:21:53.403 --- 10.0.0.1 ping statistics --- 00:21:53.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.403 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:21:53.403 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:53.403 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:53.403 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:53.403 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:53.403 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:53.403 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:53.403 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:53.403 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:53.403 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:53.403 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:53.404 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:53.404 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:53.404 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.404 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2993025 00:21:53.404 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2993025 00:21:53.404 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2993025 ']' 00:21:53.404 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.404 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:53.404 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:53.404 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:53.404 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:53.404 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.404 [2024-11-26 19:12:10.210815] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
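nvmf_tcp_init, traced just above, builds a two-port loopback out of the E810 pair: one port is moved into a fresh network namespace as the target side, the sibling port stays in the root namespace as the initiator, and a first-position iptables rule opens TCP/4420. Condensed from the trace (device and namespace names as on this rig; run as root):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2                                             # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> initiator

One reply on each ping, as in the output above, is the reachability check that lets nvmftestinit return 0 before the transport options are set.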
00:21:53.404 [2024-11-26 19:12:10.210887] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:53.404 [2024-11-26 19:12:10.313172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:53.404 [2024-11-26 19:12:10.369246] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:53.404 [2024-11-26 19:12:10.369302] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:53.404 [2024-11-26 19:12:10.369311] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:53.404 [2024-11-26 19:12:10.369318] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:53.404 [2024-11-26 19:12:10.369324] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:53.404 [2024-11-26 19:12:10.371335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:53.404 [2024-11-26 19:12:10.371497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:53.404 [2024-11-26 19:12:10.371658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.404 [2024-11-26 19:12:10.371659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:53.976 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:53.976 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:53.976 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:53.976 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:53.976 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.976 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:53.976 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:53.976 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:53.976 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:53.976 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.976 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.976 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.976 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:53.976 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:53.976 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.977 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.977 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.977 
19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:53.977 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.977 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.297 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.297 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:54.297 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.297 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.297 [2024-11-26 19:12:11.221604] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:54.297 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.297 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:54.297 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.297 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.297 Malloc1 00:21:54.297 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.297 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:54.297 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.297 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.297 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.297 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:54.298 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.298 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.298 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.298 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:54.298 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.298 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:54.298 [2024-11-26 19:12:11.297989] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:54.298 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.298 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2993553 00:21:54.298 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:54.298 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:56.297 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:56.298 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.298 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:56.298 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.298 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:56.298 "tick_rate": 2400000000, 00:21:56.298 "poll_groups": [ 00:21:56.298 { 00:21:56.298 "name": "nvmf_tgt_poll_group_000", 00:21:56.298 "admin_qpairs": 1, 00:21:56.298 "io_qpairs": 1, 00:21:56.298 "current_admin_qpairs": 1, 00:21:56.298 "current_io_qpairs": 1, 00:21:56.298 "pending_bdev_io": 0, 00:21:56.298 "completed_nvme_io": 15413, 00:21:56.298 "transports": [ 00:21:56.298 { 00:21:56.298 "trtype": "TCP" 00:21:56.298 } 00:21:56.298 ] 00:21:56.298 }, 00:21:56.298 { 00:21:56.298 "name": "nvmf_tgt_poll_group_001", 00:21:56.298 "admin_qpairs": 0, 00:21:56.298 "io_qpairs": 1, 00:21:56.298 "current_admin_qpairs": 0, 00:21:56.298 "current_io_qpairs": 1, 00:21:56.298 "pending_bdev_io": 0, 00:21:56.298 "completed_nvme_io": 15740, 00:21:56.298 "transports": [ 00:21:56.298 { 00:21:56.298 "trtype": "TCP" 00:21:56.298 } 00:21:56.298 ] 00:21:56.298 }, 00:21:56.298 { 00:21:56.298 "name": "nvmf_tgt_poll_group_002", 00:21:56.298 "admin_qpairs": 0, 00:21:56.298 "io_qpairs": 1, 00:21:56.298 "current_admin_qpairs": 0, 00:21:56.298 "current_io_qpairs": 1, 00:21:56.298 "pending_bdev_io": 0, 00:21:56.298 "completed_nvme_io": 16664, 00:21:56.298 "transports": [ 00:21:56.298 { 00:21:56.298 "trtype": "TCP" 00:21:56.298 } 00:21:56.298 ] 00:21:56.298 }, 00:21:56.298 { 00:21:56.298 "name": "nvmf_tgt_poll_group_003", 00:21:56.298 "admin_qpairs": 0, 00:21:56.298 "io_qpairs": 1, 00:21:56.298 "current_admin_qpairs": 0, 00:21:56.298 "current_io_qpairs": 1, 00:21:56.298 "pending_bdev_io": 0, 00:21:56.298 "completed_nvme_io": 15621, 00:21:56.298 "transports": [ 00:21:56.298 { 00:21:56.298 "trtype": "TCP" 00:21:56.298 } 00:21:56.298 ] 00:21:56.298 } 00:21:56.298 ] 00:21:56.298 }' 00:21:56.298 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:56.298 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:56.298 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:56.298 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:56.298 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2993553 00:22:04.428 Initializing NVMe Controllers 00:22:04.428 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:04.428 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:04.428 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:04.428 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:04.428 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:22:04.428 Initialization complete. Launching workers. 00:22:04.428 ======================================================== 00:22:04.428 Latency(us) 00:22:04.428 Device Information : IOPS MiB/s Average min max 00:22:04.428 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11946.70 46.67 5364.18 1227.95 43134.64 00:22:04.428 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 12870.60 50.28 4987.58 1256.09 44851.78 00:22:04.428 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13648.40 53.31 4689.73 1236.76 13637.32 00:22:04.428 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12494.40 48.81 5121.62 1267.81 13368.21 00:22:04.428 ======================================================== 00:22:04.428 Total : 50960.08 199.06 5028.96 1227.95 44851.78 00:22:04.428 00:22:04.428 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:22:04.428 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:04.428 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:04.428 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:04.428 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:04.428 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:04.428 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:04.428 rmmod nvme_tcp 00:22:04.428 rmmod nvme_fabrics 00:22:04.428 rmmod nvme_keyring 00:22:04.428 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:04.428 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:04.428 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:04.428 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2993025 ']' 00:22:04.428 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2993025 00:22:04.428 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2993025 ']' 00:22:04.428 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2993025 00:22:04.428 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:22:04.428 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:04.428 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2993025 00:22:04.687 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:04.687 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:04.687 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2993025' 00:22:04.687 killing process with pid 2993025 00:22:04.687 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2993025 00:22:04.687 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2993025 00:22:04.687 19:12:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:04.687 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:04.687 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:04.687 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:04.687 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:04.687 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:04.687 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:04.687 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:04.688 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:04.688 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.688 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:04.688 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.229 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:07.229 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:22:07.229 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:07.229 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:08.612 19:12:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:10.527 19:12:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:15.816 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:22:15.816 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:15.816 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:15.816 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:15.816 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:15.816 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:15.816 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.816 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:15.816 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.816 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:15.816 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:15.816 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:15.816 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:15.816 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:15.816 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:15.816 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:15.816 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:15.816 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:15.816 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:15.816 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:15.816 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:15.816 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:15.816 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:15.816 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:15.816 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:15.816 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:15.816 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:15.816 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:15.816 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:15.816 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:15.816 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:15.817 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:15.817 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:15.817 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:15.817 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:15.817 19:12:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:15.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:15.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:22:15.817 00:22:15.817 --- 10.0.0.2 ping statistics --- 00:22:15.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.817 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:15.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:15.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:22:15.817 00:22:15.817 --- 10.0.0.1 ping statistics --- 00:22:15.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.817 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:15.817 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:15.818 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:15.818 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:15.818 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:15.818 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:15.818 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:15.818 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:22:15.818 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:15.818 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:15.818 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:15.818 net.core.busy_poll = 1 00:22:15.818 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:15.818 net.core.busy_read = 1 00:22:15.818 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:15.818 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:16.078 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:16.078 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:16.078 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:16.078 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:16.078 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:16.078 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:16.078 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.078 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2998046 00:22:16.078 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2998046 00:22:16.078 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:16.078 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2998046 ']' 00:22:16.078 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.078 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:16.078 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.078 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:16.078 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.078 [2024-11-26 19:12:33.199472] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:22:16.078 [2024-11-26 19:12:33.199542] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:16.339 [2024-11-26 19:12:33.302437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:16.339 [2024-11-26 19:12:33.355809] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:16.339 [2024-11-26 19:12:33.355859] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:16.339 [2024-11-26 19:12:33.355873] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:16.339 [2024-11-26 19:12:33.355880] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:16.339 [2024-11-26 19:12:33.355886] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:16.339 [2024-11-26 19:12:33.358230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:16.339 [2024-11-26 19:12:33.358450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:16.339 [2024-11-26 19:12:33.358615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:16.339 [2024-11-26 19:12:33.358616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.910 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:16.911 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:16.911 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:16.911 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:16.911 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.911 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:16.911 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:22:16.911 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:16.911 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:16.911 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.911 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.911 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.911 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:16.911 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:16.911 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.911 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.911 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.911 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:16.911 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.911 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.172 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.172 19:12:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:17.172 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.172 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.172 [2024-11-26 19:12:34.204066] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:17.172 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.172 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:17.172 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.172 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.172 Malloc1 00:22:17.172 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.172 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:17.172 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.172 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.172 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.172 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:17.172 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.172 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.172 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.172 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:17.172 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.172 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.172 [2024-11-26 19:12:34.278937] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:17.172 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.172 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2998399 00:22:17.172 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:22:17.172 19:12:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:19.085 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:22:19.085 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.085 19:12:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:19.345 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.345 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:22:19.345 "tick_rate": 2400000000, 00:22:19.345 "poll_groups": [ 00:22:19.345 { 00:22:19.345 "name": "nvmf_tgt_poll_group_000", 00:22:19.346 "admin_qpairs": 1, 00:22:19.346 "io_qpairs": 3, 00:22:19.346 "current_admin_qpairs": 1, 00:22:19.346 "current_io_qpairs": 3, 00:22:19.346 "pending_bdev_io": 0, 00:22:19.346 "completed_nvme_io": 27346, 00:22:19.346 "transports": [ 00:22:19.346 { 00:22:19.346 "trtype": "TCP" 00:22:19.346 } 00:22:19.346 ] 00:22:19.346 }, 00:22:19.346 { 00:22:19.346 "name": "nvmf_tgt_poll_group_001", 00:22:19.346 "admin_qpairs": 0, 00:22:19.346 "io_qpairs": 1, 00:22:19.346 "current_admin_qpairs": 0, 00:22:19.346 "current_io_qpairs": 1, 00:22:19.346 "pending_bdev_io": 0, 00:22:19.346 "completed_nvme_io": 28295, 00:22:19.346 "transports": [ 00:22:19.346 { 00:22:19.346 "trtype": "TCP" 00:22:19.346 } 00:22:19.346 ] 00:22:19.346 }, 00:22:19.346 { 00:22:19.346 "name": "nvmf_tgt_poll_group_002", 00:22:19.346 "admin_qpairs": 0, 00:22:19.346 "io_qpairs": 0, 00:22:19.346 "current_admin_qpairs": 0, 00:22:19.346 "current_io_qpairs": 0, 00:22:19.346 "pending_bdev_io": 0, 00:22:19.346 "completed_nvme_io": 0, 00:22:19.346 "transports": [ 00:22:19.346 { 00:22:19.346 "trtype": "TCP" 00:22:19.346 } 00:22:19.346 ] 00:22:19.346 }, 00:22:19.346 { 00:22:19.346 "name": "nvmf_tgt_poll_group_003", 00:22:19.346 "admin_qpairs": 0, 00:22:19.346 "io_qpairs": 0, 00:22:19.346 "current_admin_qpairs": 0, 00:22:19.346 "current_io_qpairs": 0, 00:22:19.346 "pending_bdev_io": 0, 00:22:19.346 "completed_nvme_io": 0, 00:22:19.346 "transports": [ 00:22:19.346 { 00:22:19.346 "trtype": "TCP" 00:22:19.346 } 00:22:19.346 ] 00:22:19.346 } 00:22:19.346 ] 00:22:19.346 }' 00:22:19.346 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:19.346 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:22:19.346 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:22:19.346 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:22:19.346 19:12:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2998399 00:22:27.481 Initializing NVMe Controllers 00:22:27.481 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:27.481 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:27.481 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:27.481 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:27.481 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:27.481 Initialization complete. Launching workers. 
00:22:27.481 ======================================================== 00:22:27.481 Latency(us) 00:22:27.481 Device Information : IOPS MiB/s Average min max 00:22:27.481 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7078.60 27.65 9071.97 1404.23 54730.47 00:22:27.481 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6666.20 26.04 9603.21 1396.94 61021.42 00:22:27.481 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6142.20 23.99 10423.29 1394.79 61550.42 00:22:27.481 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 17608.90 68.78 3645.13 981.04 45484.34 00:22:27.481 ======================================================== 00:22:27.481 Total : 37495.90 146.47 6839.21 981.04 61550.42 00:22:27.481 00:22:27.481 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:22:27.481 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:27.481 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:27.481 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:27.481 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:27.481 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:27.481 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:27.481 rmmod nvme_tcp 00:22:27.481 rmmod nvme_fabrics 00:22:27.481 rmmod nvme_keyring 00:22:27.481 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:27.481 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:27.481 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:27.481 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2998046 ']' 00:22:27.481 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2998046 00:22:27.482 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2998046 ']' 00:22:27.482 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2998046 00:22:27.482 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:22:27.482 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:27.482 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2998046 00:22:27.482 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:27.482 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:27.482 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2998046' 00:22:27.482 killing process with pid 2998046 00:22:27.482 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2998046 00:22:27.482 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2998046 00:22:27.742 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:27.742 
19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:27.742 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:27.742 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:27.742 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:27.742 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:27.742 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:27.742 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:27.742 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:27.742 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.742 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:27.742 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.043 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:31.043 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:31.043 00:22:31.043 real 0m54.267s 00:22:31.043 user 2m50.223s 00:22:31.043 sys 0m11.531s 00:22:31.043 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:31.043 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.043 ************************************ 00:22:31.043 END TEST nvmf_perf_adq 00:22:31.043 ************************************ 00:22:31.043 19:12:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:31.043 19:12:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:31.043 19:12:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:31.043 19:12:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:31.043 ************************************ 00:22:31.043 START TEST nvmf_shutdown 00:22:31.043 ************************************ 00:22:31.043 19:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:31.043 * Looking for test storage... 
00:22:31.043 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:31.043 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:31.043 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:22:31.043 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:31.043 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:31.043 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:31.043 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:31.043 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:31.043 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:31.043 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:31.043 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:31.043 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:31.043 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:31.043 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:31.043 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:31.043 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:31.043 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:31.043 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:31.043 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:31.043 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:31.043 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:31.043 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:31.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.044 --rc genhtml_branch_coverage=1 00:22:31.044 --rc genhtml_function_coverage=1 00:22:31.044 --rc genhtml_legend=1 00:22:31.044 --rc geninfo_all_blocks=1 00:22:31.044 --rc geninfo_unexecuted_blocks=1 00:22:31.044 00:22:31.044 ' 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:31.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.044 --rc genhtml_branch_coverage=1 00:22:31.044 --rc genhtml_function_coverage=1 00:22:31.044 --rc genhtml_legend=1 00:22:31.044 --rc geninfo_all_blocks=1 00:22:31.044 --rc geninfo_unexecuted_blocks=1 00:22:31.044 00:22:31.044 ' 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:31.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.044 --rc genhtml_branch_coverage=1 00:22:31.044 --rc genhtml_function_coverage=1 00:22:31.044 --rc genhtml_legend=1 00:22:31.044 --rc geninfo_all_blocks=1 00:22:31.044 --rc geninfo_unexecuted_blocks=1 00:22:31.044 00:22:31.044 ' 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:31.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.044 --rc genhtml_branch_coverage=1 00:22:31.044 --rc genhtml_function_coverage=1 00:22:31.044 --rc genhtml_legend=1 00:22:31.044 --rc geninfo_all_blocks=1 00:22:31.044 --rc geninfo_unexecuted_blocks=1 00:22:31.044 00:22:31.044 ' 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:31.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:31.044 19:12:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:31.044 ************************************ 00:22:31.044 START TEST nvmf_shutdown_tc1 00:22:31.044 ************************************ 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:31.044 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:31.045 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:31.045 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:31.045 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:31.045 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:31.045 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.045 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:31.045 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.045 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:31.045 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:31.045 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:31.045 19:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:39.187 19:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:39.187 19:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:39.187 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:39.187 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:39.187 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:39.187 19:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:39.187 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:39.187 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:39.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:39.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:22:39.188 00:22:39.188 --- 10.0.0.2 ping statistics --- 00:22:39.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.188 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:39.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:39.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:22:39.188 00:22:39.188 --- 10.0.0.1 ping statistics --- 00:22:39.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.188 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3004862 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3004862 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3004862 ']' 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
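Both pings succeeding closes out nvmf_tcp_init: the target-side E810 port (cvl_0_0) now sits in a private network namespace with 10.0.0.2, the initiator port (cvl_0_1) keeps 10.0.0.1 in the root namespace, and an iptables rule admits traffic to the NVMe/TCP listener's port 4420. A condensed, order-preserving sketch of those steps, with interface names and addresses copied from the trace (run as root; the real ipts wrapper also tags the rule with an SPDK_NVMF comment so it can be cleaned up later):

    # Recreate the nvmf_tcp_init sequence traced above. The target NIC is
    # moved into the namespace while the initiator NIC stays outside, so
    # the test traffic genuinely crosses between the two physical ports.
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # Let the initiator reach the target's NVMe/TCP listener.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Sanity-check both directions, exactly as the log does.
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

This is also why every target-side command from here on carries the ip netns exec cvl_0_0_ns_spdk prefix, including the nvmf_tgt launch that the "Waiting for process..." message above refers to.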
00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:39.188 19:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:39.188 [2024-11-26 19:12:55.942244] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:22:39.188 [2024-11-26 19:12:55.942310] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:39.188 [2024-11-26 19:12:56.044024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:39.188 [2024-11-26 19:12:56.096069] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:39.188 [2024-11-26 19:12:56.096125] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:39.188 [2024-11-26 19:12:56.096134] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:39.188 [2024-11-26 19:12:56.096141] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:39.188 [2024-11-26 19:12:56.096148] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:39.188 [2024-11-26 19:12:56.098496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:39.188 [2024-11-26 19:12:56.098630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:39.188 [2024-11-26 19:12:56.098789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:39.188 [2024-11-26 19:12:56.098790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:39.761 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:39.761 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:39.761 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:39.761 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:39.761 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:39.761 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.761 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:39.761 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.761 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:39.761 [2024-11-26 19:12:56.819779] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:39.761 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.761 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:39.761 19:12:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:39.761 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:39.761 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:39.761 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:39.761 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:39.761 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:39.761 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:39.761 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:39.761 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:39.761 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:39.761 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:39.761 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:39.761 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:39.761 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:39.761 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:39.761 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:39.761 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:39.761 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:39.761 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:39.761 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:39.761 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:39.761 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:39.761 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:39.761 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:39.761 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:39.761 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.761 19:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:39.761 Malloc1 
00:22:39.761 [2024-11-26 19:12:56.949750] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.022 Malloc2 00:22:40.022 Malloc3 00:22:40.022 Malloc4 00:22:40.022 Malloc5 00:22:40.022 Malloc6 00:22:40.022 Malloc7 00:22:40.284 Malloc8 00:22:40.284 Malloc9 00:22:40.284 Malloc10 00:22:40.284 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.284 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:40.284 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:40.284 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:40.284 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3005240 00:22:40.284 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3005240 /var/tmp/bdevperf.sock 00:22:40.284 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3005240 ']' 00:22:40.284 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:40.284 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:40.284 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:40.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
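The heredoc wall that follows is gen_nvmf_target_json at work: for each of the ten subsystems it appends one bdev_nvme_attach_controller stanza (per-subsystem cnodeN/hostN NQNs, hdgst/ddgst defaulted to false) to a config array, joins the stanzas with commas via IFS, and runs the result through jq (the common.sh@584 step). Process substitution then hands the output to bdev_svc as --json /dev/fd/63 with no temp file. A compressed sketch of the same pattern; gen_json is a hypothetical stand-in, and the exact wrapper object around the joined stanzas is an assumption rather than a copy of the real function:

    # One attach-controller stanza per subsystem id, comma-joined inside a
    # bdev-subsystem wrapper, validated and pretty-printed by jq.
    gen_json() {
        local sub stanzas=()
        for sub in "$@"; do
            stanzas+=("$(printf '{"params": {"name": "Nvme%s", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": false, "ddgst": false}, "method": "bdev_nvme_attach_controller"}' "$sub" "$sub" "$sub")")
        done
        local IFS=,    # "${stanzas[*]}" joins elements on the first IFS char
        printf '{"subsystems": [{"subsystem": "bdev", "config": [%s]}]}\n' "${stanzas[*]}" | jq .
    }

    # Feed it to an app without a temp file, mirroring the traced launch:
    #   bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_json {1..10})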
00:22:40.284 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:40.284 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:40.284 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:40.284 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:40.284 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:40.284 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:40.284 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.284 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.284 { 00:22:40.284 "params": { 00:22:40.284 "name": "Nvme$subsystem", 00:22:40.284 "trtype": "$TEST_TRANSPORT", 00:22:40.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.284 "adrfam": "ipv4", 00:22:40.284 "trsvcid": "$NVMF_PORT", 00:22:40.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.284 "hdgst": ${hdgst:-false}, 00:22:40.284 "ddgst": ${ddgst:-false} 00:22:40.284 }, 00:22:40.284 "method": "bdev_nvme_attach_controller" 00:22:40.284 } 00:22:40.284 EOF 00:22:40.284 )") 00:22:40.284 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:40.284 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.284 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.284 { 00:22:40.284 "params": { 00:22:40.284 "name": "Nvme$subsystem", 00:22:40.284 "trtype": "$TEST_TRANSPORT", 00:22:40.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.284 "adrfam": "ipv4", 00:22:40.284 "trsvcid": "$NVMF_PORT", 00:22:40.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.284 "hdgst": ${hdgst:-false}, 00:22:40.284 "ddgst": ${ddgst:-false} 00:22:40.284 }, 00:22:40.284 "method": "bdev_nvme_attach_controller" 00:22:40.284 } 00:22:40.284 EOF 00:22:40.284 )") 00:22:40.284 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:40.284 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.284 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.284 { 00:22:40.284 "params": { 00:22:40.284 "name": "Nvme$subsystem", 00:22:40.284 "trtype": "$TEST_TRANSPORT", 00:22:40.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.284 "adrfam": "ipv4", 00:22:40.284 "trsvcid": "$NVMF_PORT", 00:22:40.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.284 "hdgst": ${hdgst:-false}, 00:22:40.284 "ddgst": ${ddgst:-false} 00:22:40.284 }, 00:22:40.284 "method": "bdev_nvme_attach_controller" 
00:22:40.284 } 00:22:40.284 EOF 00:22:40.284 )") 00:22:40.284 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:40.284 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.284 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.284 { 00:22:40.284 "params": { 00:22:40.284 "name": "Nvme$subsystem", 00:22:40.284 "trtype": "$TEST_TRANSPORT", 00:22:40.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.284 "adrfam": "ipv4", 00:22:40.284 "trsvcid": "$NVMF_PORT", 00:22:40.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.284 "hdgst": ${hdgst:-false}, 00:22:40.284 "ddgst": ${ddgst:-false} 00:22:40.284 }, 00:22:40.284 "method": "bdev_nvme_attach_controller" 00:22:40.284 } 00:22:40.284 EOF 00:22:40.284 )") 00:22:40.284 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:40.284 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.284 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.284 { 00:22:40.284 "params": { 00:22:40.284 "name": "Nvme$subsystem", 00:22:40.284 "trtype": "$TEST_TRANSPORT", 00:22:40.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.284 "adrfam": "ipv4", 00:22:40.284 "trsvcid": "$NVMF_PORT", 00:22:40.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.284 "hdgst": ${hdgst:-false}, 00:22:40.284 "ddgst": ${ddgst:-false} 00:22:40.284 }, 00:22:40.284 "method": "bdev_nvme_attach_controller" 00:22:40.284 } 00:22:40.284 EOF 00:22:40.284 )") 00:22:40.284 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:40.284 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.284 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.284 { 00:22:40.284 "params": { 00:22:40.284 "name": "Nvme$subsystem", 00:22:40.284 "trtype": "$TEST_TRANSPORT", 00:22:40.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.284 "adrfam": "ipv4", 00:22:40.284 "trsvcid": "$NVMF_PORT", 00:22:40.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.285 "hdgst": ${hdgst:-false}, 00:22:40.285 "ddgst": ${ddgst:-false} 00:22:40.285 }, 00:22:40.285 "method": "bdev_nvme_attach_controller" 00:22:40.285 } 00:22:40.285 EOF 00:22:40.285 )") 00:22:40.285 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:40.285 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.285 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.285 { 00:22:40.285 "params": { 00:22:40.285 "name": "Nvme$subsystem", 00:22:40.285 "trtype": "$TEST_TRANSPORT", 00:22:40.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.285 "adrfam": "ipv4", 00:22:40.285 "trsvcid": "$NVMF_PORT", 00:22:40.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.285 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.285 "hdgst": ${hdgst:-false}, 00:22:40.285 "ddgst": ${ddgst:-false} 00:22:40.285 }, 00:22:40.285 "method": "bdev_nvme_attach_controller" 00:22:40.285 } 00:22:40.285 EOF 00:22:40.285 )") 00:22:40.285 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:40.285 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.285 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.285 { 00:22:40.285 "params": { 00:22:40.285 "name": "Nvme$subsystem", 00:22:40.285 "trtype": "$TEST_TRANSPORT", 00:22:40.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.285 "adrfam": "ipv4", 00:22:40.285 "trsvcid": "$NVMF_PORT", 00:22:40.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.285 "hdgst": ${hdgst:-false}, 00:22:40.285 "ddgst": ${ddgst:-false} 00:22:40.285 }, 00:22:40.285 "method": "bdev_nvme_attach_controller" 00:22:40.285 } 00:22:40.285 EOF 00:22:40.285 )") 00:22:40.285 [2024-11-26 19:12:57.475938] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:22:40.285 [2024-11-26 19:12:57.476011] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:40.285 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:40.285 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.285 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.285 { 00:22:40.285 "params": { 00:22:40.285 "name": "Nvme$subsystem", 00:22:40.285 "trtype": "$TEST_TRANSPORT", 00:22:40.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.285 "adrfam": "ipv4", 00:22:40.285 "trsvcid": "$NVMF_PORT", 00:22:40.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.285 "hdgst": ${hdgst:-false}, 00:22:40.285 "ddgst": ${ddgst:-false} 00:22:40.285 }, 00:22:40.285 "method": "bdev_nvme_attach_controller" 00:22:40.285 } 00:22:40.285 EOF 00:22:40.285 )") 00:22:40.285 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:40.285 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.285 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.285 { 00:22:40.285 "params": { 00:22:40.285 "name": "Nvme$subsystem", 00:22:40.285 "trtype": "$TEST_TRANSPORT", 00:22:40.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.285 "adrfam": "ipv4", 00:22:40.285 "trsvcid": "$NVMF_PORT", 00:22:40.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.285 "hdgst": ${hdgst:-false}, 00:22:40.285 "ddgst": ${ddgst:-false} 00:22:40.285 }, 00:22:40.285 "method": "bdev_nvme_attach_controller" 00:22:40.285 } 00:22:40.285 EOF 00:22:40.285 )") 00:22:40.546 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- 
# cat 00:22:40.546 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:22:40.546 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:40.546 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:40.546 "params": { 00:22:40.546 "name": "Nvme1", 00:22:40.546 "trtype": "tcp", 00:22:40.546 "traddr": "10.0.0.2", 00:22:40.546 "adrfam": "ipv4", 00:22:40.546 "trsvcid": "4420", 00:22:40.546 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:40.546 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:40.546 "hdgst": false, 00:22:40.546 "ddgst": false 00:22:40.546 }, 00:22:40.546 "method": "bdev_nvme_attach_controller" 00:22:40.546 },{ 00:22:40.546 "params": { 00:22:40.546 "name": "Nvme2", 00:22:40.547 "trtype": "tcp", 00:22:40.547 "traddr": "10.0.0.2", 00:22:40.547 "adrfam": "ipv4", 00:22:40.547 "trsvcid": "4420", 00:22:40.547 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:40.547 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:40.547 "hdgst": false, 00:22:40.547 "ddgst": false 00:22:40.547 }, 00:22:40.547 "method": "bdev_nvme_attach_controller" 00:22:40.547 },{ 00:22:40.547 "params": { 00:22:40.547 "name": "Nvme3", 00:22:40.547 "trtype": "tcp", 00:22:40.547 "traddr": "10.0.0.2", 00:22:40.547 "adrfam": "ipv4", 00:22:40.547 "trsvcid": "4420", 00:22:40.547 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:40.547 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:40.547 "hdgst": false, 00:22:40.547 "ddgst": false 00:22:40.547 }, 00:22:40.547 "method": "bdev_nvme_attach_controller" 00:22:40.547 },{ 00:22:40.547 "params": { 00:22:40.547 "name": "Nvme4", 00:22:40.547 "trtype": "tcp", 00:22:40.547 "traddr": "10.0.0.2", 00:22:40.547 "adrfam": "ipv4", 00:22:40.547 "trsvcid": "4420", 00:22:40.547 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:40.547 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:40.547 "hdgst": false, 00:22:40.547 "ddgst": false 00:22:40.547 }, 00:22:40.547 "method": "bdev_nvme_attach_controller" 00:22:40.547 },{ 00:22:40.547 "params": { 00:22:40.547 "name": "Nvme5", 00:22:40.547 "trtype": "tcp", 00:22:40.547 "traddr": "10.0.0.2", 00:22:40.547 "adrfam": "ipv4", 00:22:40.547 "trsvcid": "4420", 00:22:40.547 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:40.547 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:40.547 "hdgst": false, 00:22:40.547 "ddgst": false 00:22:40.547 }, 00:22:40.547 "method": "bdev_nvme_attach_controller" 00:22:40.547 },{ 00:22:40.547 "params": { 00:22:40.547 "name": "Nvme6", 00:22:40.547 "trtype": "tcp", 00:22:40.547 "traddr": "10.0.0.2", 00:22:40.547 "adrfam": "ipv4", 00:22:40.547 "trsvcid": "4420", 00:22:40.547 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:40.547 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:40.547 "hdgst": false, 00:22:40.547 "ddgst": false 00:22:40.547 }, 00:22:40.547 "method": "bdev_nvme_attach_controller" 00:22:40.547 },{ 00:22:40.547 "params": { 00:22:40.547 "name": "Nvme7", 00:22:40.547 "trtype": "tcp", 00:22:40.547 "traddr": "10.0.0.2", 00:22:40.547 "adrfam": "ipv4", 00:22:40.547 "trsvcid": "4420", 00:22:40.547 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:40.547 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:40.547 "hdgst": false, 00:22:40.547 "ddgst": false 00:22:40.547 }, 00:22:40.547 "method": "bdev_nvme_attach_controller" 00:22:40.547 },{ 00:22:40.547 "params": { 00:22:40.547 "name": "Nvme8", 00:22:40.547 "trtype": "tcp", 00:22:40.547 "traddr": "10.0.0.2", 00:22:40.547 "adrfam": "ipv4", 00:22:40.547 
"trsvcid": "4420", 00:22:40.547 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:40.547 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:40.547 "hdgst": false, 00:22:40.547 "ddgst": false 00:22:40.547 }, 00:22:40.547 "method": "bdev_nvme_attach_controller" 00:22:40.547 },{ 00:22:40.547 "params": { 00:22:40.547 "name": "Nvme9", 00:22:40.547 "trtype": "tcp", 00:22:40.547 "traddr": "10.0.0.2", 00:22:40.547 "adrfam": "ipv4", 00:22:40.547 "trsvcid": "4420", 00:22:40.547 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:40.547 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:40.547 "hdgst": false, 00:22:40.547 "ddgst": false 00:22:40.547 }, 00:22:40.547 "method": "bdev_nvme_attach_controller" 00:22:40.547 },{ 00:22:40.547 "params": { 00:22:40.547 "name": "Nvme10", 00:22:40.547 "trtype": "tcp", 00:22:40.547 "traddr": "10.0.0.2", 00:22:40.547 "adrfam": "ipv4", 00:22:40.547 "trsvcid": "4420", 00:22:40.547 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:40.547 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:40.547 "hdgst": false, 00:22:40.547 "ddgst": false 00:22:40.547 }, 00:22:40.547 "method": "bdev_nvme_attach_controller" 00:22:40.547 }' 00:22:40.547 [2024-11-26 19:12:57.572213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.547 [2024-11-26 19:12:57.626131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.932 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:41.932 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:41.933 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:41.933 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.933 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:41.933 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.933 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3005240 00:22:41.933 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:41.933 19:12:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:42.885 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3005240 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:42.885 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3004862 00:22:42.885 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:42.885 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:42.885 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:42.885 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 
00:22:42.885 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.885 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.885 { 00:22:42.885 "params": { 00:22:42.885 "name": "Nvme$subsystem", 00:22:42.885 "trtype": "$TEST_TRANSPORT", 00:22:42.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.885 "adrfam": "ipv4", 00:22:42.885 "trsvcid": "$NVMF_PORT", 00:22:42.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.885 "hdgst": ${hdgst:-false}, 00:22:42.885 "ddgst": ${ddgst:-false} 00:22:42.885 }, 00:22:42.885 "method": "bdev_nvme_attach_controller" 00:22:42.885 } 00:22:42.885 EOF 00:22:42.885 )") 00:22:42.885 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:42.885 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.885 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.885 { 00:22:42.885 "params": { 00:22:42.885 "name": "Nvme$subsystem", 00:22:42.885 "trtype": "$TEST_TRANSPORT", 00:22:42.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.885 "adrfam": "ipv4", 00:22:42.885 "trsvcid": "$NVMF_PORT", 00:22:42.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.885 "hdgst": ${hdgst:-false}, 00:22:42.885 "ddgst": ${ddgst:-false} 00:22:42.885 }, 00:22:42.885 "method": "bdev_nvme_attach_controller" 00:22:42.885 } 00:22:42.885 EOF 00:22:42.885 )") 00:22:42.885 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:42.886 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.886 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.886 { 00:22:42.886 "params": { 00:22:42.886 "name": "Nvme$subsystem", 00:22:42.886 "trtype": "$TEST_TRANSPORT", 00:22:42.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.886 "adrfam": "ipv4", 00:22:42.886 "trsvcid": "$NVMF_PORT", 00:22:42.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.886 "hdgst": ${hdgst:-false}, 00:22:42.886 "ddgst": ${ddgst:-false} 00:22:42.886 }, 00:22:42.886 "method": "bdev_nvme_attach_controller" 00:22:42.886 } 00:22:42.886 EOF 00:22:42.886 )") 00:22:42.886 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:42.886 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.886 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.886 { 00:22:42.886 "params": { 00:22:42.886 "name": "Nvme$subsystem", 00:22:42.886 "trtype": "$TEST_TRANSPORT", 00:22:42.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.886 "adrfam": "ipv4", 00:22:42.886 "trsvcid": "$NVMF_PORT", 00:22:42.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.886 "hdgst": ${hdgst:-false}, 00:22:42.886 "ddgst": ${ddgst:-false} 00:22:42.886 }, 00:22:42.886 "method": 
"bdev_nvme_attach_controller" 00:22:42.886 } 00:22:42.886 EOF 00:22:42.886 )") 00:22:42.886 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:42.886 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.886 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.886 { 00:22:42.886 "params": { 00:22:42.886 "name": "Nvme$subsystem", 00:22:42.886 "trtype": "$TEST_TRANSPORT", 00:22:42.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.886 "adrfam": "ipv4", 00:22:42.886 "trsvcid": "$NVMF_PORT", 00:22:42.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.886 "hdgst": ${hdgst:-false}, 00:22:42.886 "ddgst": ${ddgst:-false} 00:22:42.886 }, 00:22:42.886 "method": "bdev_nvme_attach_controller" 00:22:42.886 } 00:22:42.886 EOF 00:22:42.886 )") 00:22:42.886 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:42.886 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.886 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.886 { 00:22:42.886 "params": { 00:22:42.886 "name": "Nvme$subsystem", 00:22:42.886 "trtype": "$TEST_TRANSPORT", 00:22:42.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.886 "adrfam": "ipv4", 00:22:42.886 "trsvcid": "$NVMF_PORT", 00:22:42.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.886 "hdgst": ${hdgst:-false}, 00:22:42.886 "ddgst": ${ddgst:-false} 00:22:42.886 }, 00:22:42.886 "method": "bdev_nvme_attach_controller" 00:22:42.886 } 00:22:42.886 EOF 00:22:42.886 )") 00:22:42.886 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:42.886 [2024-11-26 19:12:59.943048] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
00:22:42.886 [2024-11-26 19:12:59.943104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3005619 ] 00:22:42.886 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.886 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.886 { 00:22:42.886 "params": { 00:22:42.886 "name": "Nvme$subsystem", 00:22:42.886 "trtype": "$TEST_TRANSPORT", 00:22:42.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.886 "adrfam": "ipv4", 00:22:42.886 "trsvcid": "$NVMF_PORT", 00:22:42.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.886 "hdgst": ${hdgst:-false}, 00:22:42.886 "ddgst": ${ddgst:-false} 00:22:42.886 }, 00:22:42.886 "method": "bdev_nvme_attach_controller" 00:22:42.886 } 00:22:42.886 EOF 00:22:42.886 )") 00:22:42.886 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:42.886 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.886 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.886 { 00:22:42.886 "params": { 00:22:42.886 "name": "Nvme$subsystem", 00:22:42.886 "trtype": "$TEST_TRANSPORT", 00:22:42.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.886 "adrfam": "ipv4", 00:22:42.886 "trsvcid": "$NVMF_PORT", 00:22:42.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.886 "hdgst": ${hdgst:-false}, 00:22:42.886 "ddgst": ${ddgst:-false} 00:22:42.886 }, 00:22:42.886 "method": "bdev_nvme_attach_controller" 00:22:42.886 } 00:22:42.886 EOF 00:22:42.886 )") 00:22:42.886 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:42.886 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.886 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.886 { 00:22:42.886 "params": { 00:22:42.886 "name": "Nvme$subsystem", 00:22:42.886 "trtype": "$TEST_TRANSPORT", 00:22:42.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.886 "adrfam": "ipv4", 00:22:42.886 "trsvcid": "$NVMF_PORT", 00:22:42.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.886 "hdgst": ${hdgst:-false}, 00:22:42.886 "ddgst": ${ddgst:-false} 00:22:42.886 }, 00:22:42.886 "method": "bdev_nvme_attach_controller" 00:22:42.886 } 00:22:42.886 EOF 00:22:42.886 )") 00:22:42.886 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:42.886 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.886 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.886 { 00:22:42.886 "params": { 00:22:42.886 "name": "Nvme$subsystem", 00:22:42.886 "trtype": "$TEST_TRANSPORT", 00:22:42.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.886 
"adrfam": "ipv4", 00:22:42.886 "trsvcid": "$NVMF_PORT", 00:22:42.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.886 "hdgst": ${hdgst:-false}, 00:22:42.886 "ddgst": ${ddgst:-false} 00:22:42.886 }, 00:22:42.886 "method": "bdev_nvme_attach_controller" 00:22:42.886 } 00:22:42.886 EOF 00:22:42.886 )") 00:22:42.886 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:42.886 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:22:42.886 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:42.886 19:12:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:42.886 "params": { 00:22:42.886 "name": "Nvme1", 00:22:42.886 "trtype": "tcp", 00:22:42.886 "traddr": "10.0.0.2", 00:22:42.886 "adrfam": "ipv4", 00:22:42.886 "trsvcid": "4420", 00:22:42.886 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:42.886 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:42.886 "hdgst": false, 00:22:42.886 "ddgst": false 00:22:42.886 }, 00:22:42.886 "method": "bdev_nvme_attach_controller" 00:22:42.886 },{ 00:22:42.886 "params": { 00:22:42.886 "name": "Nvme2", 00:22:42.886 "trtype": "tcp", 00:22:42.886 "traddr": "10.0.0.2", 00:22:42.886 "adrfam": "ipv4", 00:22:42.886 "trsvcid": "4420", 00:22:42.886 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:42.886 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:42.886 "hdgst": false, 00:22:42.886 "ddgst": false 00:22:42.886 }, 00:22:42.886 "method": "bdev_nvme_attach_controller" 00:22:42.886 },{ 00:22:42.886 "params": { 00:22:42.886 "name": "Nvme3", 00:22:42.886 "trtype": "tcp", 00:22:42.886 "traddr": "10.0.0.2", 00:22:42.886 "adrfam": "ipv4", 00:22:42.886 "trsvcid": "4420", 00:22:42.886 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:42.886 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:42.886 "hdgst": false, 00:22:42.886 "ddgst": false 00:22:42.886 }, 00:22:42.886 "method": "bdev_nvme_attach_controller" 00:22:42.886 },{ 00:22:42.886 "params": { 00:22:42.886 "name": "Nvme4", 00:22:42.886 "trtype": "tcp", 00:22:42.886 "traddr": "10.0.0.2", 00:22:42.886 "adrfam": "ipv4", 00:22:42.886 "trsvcid": "4420", 00:22:42.886 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:42.886 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:42.886 "hdgst": false, 00:22:42.886 "ddgst": false 00:22:42.886 }, 00:22:42.886 "method": "bdev_nvme_attach_controller" 00:22:42.886 },{ 00:22:42.886 "params": { 00:22:42.886 "name": "Nvme5", 00:22:42.886 "trtype": "tcp", 00:22:42.887 "traddr": "10.0.0.2", 00:22:42.887 "adrfam": "ipv4", 00:22:42.887 "trsvcid": "4420", 00:22:42.887 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:42.887 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:42.887 "hdgst": false, 00:22:42.887 "ddgst": false 00:22:42.887 }, 00:22:42.887 "method": "bdev_nvme_attach_controller" 00:22:42.887 },{ 00:22:42.887 "params": { 00:22:42.887 "name": "Nvme6", 00:22:42.887 "trtype": "tcp", 00:22:42.887 "traddr": "10.0.0.2", 00:22:42.887 "adrfam": "ipv4", 00:22:42.887 "trsvcid": "4420", 00:22:42.887 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:42.887 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:42.887 "hdgst": false, 00:22:42.887 "ddgst": false 00:22:42.887 }, 00:22:42.887 "method": "bdev_nvme_attach_controller" 00:22:42.887 },{ 00:22:42.887 "params": { 00:22:42.887 "name": "Nvme7", 00:22:42.887 "trtype": "tcp", 00:22:42.887 "traddr": "10.0.0.2", 
00:22:42.887 "adrfam": "ipv4", 00:22:42.887 "trsvcid": "4420", 00:22:42.887 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:42.887 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:42.887 "hdgst": false, 00:22:42.887 "ddgst": false 00:22:42.887 }, 00:22:42.887 "method": "bdev_nvme_attach_controller" 00:22:42.887 },{ 00:22:42.887 "params": { 00:22:42.887 "name": "Nvme8", 00:22:42.887 "trtype": "tcp", 00:22:42.887 "traddr": "10.0.0.2", 00:22:42.887 "adrfam": "ipv4", 00:22:42.887 "trsvcid": "4420", 00:22:42.887 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:42.887 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:42.887 "hdgst": false, 00:22:42.887 "ddgst": false 00:22:42.887 }, 00:22:42.887 "method": "bdev_nvme_attach_controller" 00:22:42.887 },{ 00:22:42.887 "params": { 00:22:42.887 "name": "Nvme9", 00:22:42.887 "trtype": "tcp", 00:22:42.887 "traddr": "10.0.0.2", 00:22:42.887 "adrfam": "ipv4", 00:22:42.887 "trsvcid": "4420", 00:22:42.887 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:42.887 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:42.887 "hdgst": false, 00:22:42.887 "ddgst": false 00:22:42.887 }, 00:22:42.887 "method": "bdev_nvme_attach_controller" 00:22:42.887 },{ 00:22:42.887 "params": { 00:22:42.887 "name": "Nvme10", 00:22:42.887 "trtype": "tcp", 00:22:42.887 "traddr": "10.0.0.2", 00:22:42.887 "adrfam": "ipv4", 00:22:42.887 "trsvcid": "4420", 00:22:42.887 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:42.887 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:42.887 "hdgst": false, 00:22:42.887 "ddgst": false 00:22:42.887 }, 00:22:42.887 "method": "bdev_nvme_attach_controller" 00:22:42.887 }' 00:22:42.887 [2024-11-26 19:13:00.038652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.887 [2024-11-26 19:13:00.074853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.272 Running I/O for 1 seconds... 
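What just ran is the heart of shutdown_tc1: the bdev_svc app (pid 3005240) was hard-killed while the nvmf target (pid 3004862) stayed up, and bdevperf was relaunched against the same ten subsystems using a JSON config expanded from one heredoc fragment per subsystem, exactly as the config+=() loop in the trace shows. A minimal bash re-creation of that flow follows; the variable names (TEST_TRANSPORT, NVMF_FIRST_TARGET_IP, NVMF_PORT, num_subsystems) are the ones visible in the trace, the pid variables stand in for the literal values, and the final JSON wrapper is a simplification of the real gen_nvmf_target_json helper:

# sketch only: fragment generator plus the kill / verify / re-drive sequence
gen_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,    # comma-join the fragments, as the IFS=, / printf pair in the trace does
    printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' "${config[*]}" | jq .
}

kill -9 "$bdevsvc_pid"        # hard-kill the app holding the NVMe-oF connections
rm -f /var/run/spdk_bdev1     # drop its stale socket
sleep 1                       # let the target observe the disconnects
kill -0 "$nvmfpid"            # a non-zero exit here would mean the target died too
build/examples/bdevperf -q 64 -o 65536 -w verify -t 1 \
    --json <(gen_target_json_sketch "${num_subsystems[@]}")

The config never touches disk: process substitution hands bdevperf the generated JSON on a /dev/fd descriptor, which is the /dev/fd/62 seen in the trace.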
00:22:45.476 1861.00 IOPS, 116.31 MiB/s
00:22:45.476 Latency(us)
[2024-11-26T18:13:02.689Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:45.476 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.476 Verification LBA range: start 0x0 length 0x400
00:22:45.476 Nvme1n1 : 1.11 234.86 14.68 0.00 0.00 269039.59 3249.49 235929.60
00:22:45.476 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.476 Verification LBA range: start 0x0 length 0x400
00:22:45.476 Nvme2n1 : 1.12 228.30 14.27 0.00 0.00 272550.61 15837.87 248162.99
00:22:45.476 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.476 Verification LBA range: start 0x0 length 0x400
00:22:45.476 Nvme3n1 : 1.05 244.06 15.25 0.00 0.00 249993.17 17148.59 242920.11
00:22:45.476 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.476 Verification LBA range: start 0x0 length 0x400
00:22:45.476 Nvme4n1 : 1.08 237.38 14.84 0.00 0.00 252623.36 19551.57 246415.36
00:22:45.476 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.476 Verification LBA range: start 0x0 length 0x400
00:22:45.476 Nvme5n1 : 1.12 229.04 14.32 0.00 0.00 257720.75 19223.89 249910.61
00:22:45.476 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.476 Verification LBA range: start 0x0 length 0x400
00:22:45.476 Nvme6n1 : 1.11 231.66 14.48 0.00 0.00 249879.68 18131.63 246415.36
00:22:45.476 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.476 Verification LBA range: start 0x0 length 0x400
00:22:45.476 Nvme7n1 : 1.17 272.83 17.05 0.00 0.00 209622.27 11960.32 244667.73
00:22:45.476 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.476 Verification LBA range: start 0x0 length 0x400
00:22:45.476 Nvme8n1 : 1.18 273.27 17.08 0.00 0.00 205459.07 1140.05 232434.35
00:22:45.476 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.476 Verification LBA range: start 0x0 length 0x400
00:22:45.476 Nvme9n1 : 1.19 268.65 16.79 0.00 0.00 205614.68 12943.36 244667.73
00:22:45.476 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.476 Verification LBA range: start 0x0 length 0x400
00:22:45.476 Nvme10n1 : 1.20 267.73 16.73 0.00 0.00 202626.90 10594.99 265639.25
00:22:45.476 [2024-11-26T18:13:02.689Z] ===================================================================================================================
00:22:45.476 [2024-11-26T18:13:02.689Z] Total : 2487.79 155.49 0.00 0.00 234662.73 1140.05 265639.25
00:22:45.476 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
19:13:02
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:45.476 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:45.476 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:45.476 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:45.476 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:45.476 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:45.476 rmmod nvme_tcp 00:22:45.737 rmmod nvme_fabrics 00:22:45.737 rmmod nvme_keyring 00:22:45.737 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:45.737 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:45.737 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:45.737 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3004862 ']' 00:22:45.737 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3004862 00:22:45.737 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3004862 ']' 00:22:45.737 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 3004862 00:22:45.737 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:22:45.737 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:45.737 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3004862 00:22:45.737 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:45.737 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:45.737 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3004862' 00:22:45.737 killing process with pid 3004862 00:22:45.737 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3004862 00:22:45.737 19:13:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3004862 00:22:45.997 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:45.997 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:45.997 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:45.997 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:45.998 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:45.998 19:13:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:22:45.998 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:22:45.998 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:45.998 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:45.998 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.998 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:45.998 19:13:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.912 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:47.912 00:22:47.912 real 0m16.888s 00:22:47.912 user 0m33.650s 00:22:47.912 sys 0m7.019s 00:22:47.912 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:47.912 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:47.912 ************************************ 00:22:47.912 END TEST nvmf_shutdown_tc1 00:22:47.912 ************************************ 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:48.173 ************************************ 00:22:48.173 START TEST nvmf_shutdown_tc2 00:22:48.173 ************************************ 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:48.173 19:13:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:48.173 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:48.174 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:48.174 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:48.174 19:13:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:48.174 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:48.174 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:48.174 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:48.436 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:48.436 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:48.436 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:48.436 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:48.436 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:48.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:48.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:22:48.436 00:22:48.436 --- 10.0.0.2 ping statistics --- 00:22:48.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.436 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:22:48.436 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:48.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:48.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:22:48.436 00:22:48.436 --- 10.0.0.1 ping statistics --- 00:22:48.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.436 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:22:48.436 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:48.436 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:22:48.436 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:48.436 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:48.436 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:48.436 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:48.436 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:48.436 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:48.436 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:48.436 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:48.436 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:48.436 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:48.436 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:48.436 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3006794 00:22:48.436 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3006794 00:22:48.436 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:48.436 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3006794 ']' 00:22:48.436 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.436 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:48.436 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:48.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:48.436 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:48.436 19:13:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:48.436 [2024-11-26 19:13:05.634672] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:22:48.436 [2024-11-26 19:13:05.634739] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:48.696 [2024-11-26 19:13:05.733011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:48.696 [2024-11-26 19:13:05.767800] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:48.696 [2024-11-26 19:13:05.767831] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:48.696 [2024-11-26 19:13:05.767837] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:48.696 [2024-11-26 19:13:05.767843] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:48.696 [2024-11-26 19:13:05.767847] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:48.696 [2024-11-26 19:13:05.769423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:48.696 [2024-11-26 19:13:05.769574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:48.697 [2024-11-26 19:13:05.769727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.697 [2024-11-26 19:13:05.769729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:49.267 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:49.267 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:49.267 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:49.267 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:49.267 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:49.527 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:49.527 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:49.527 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.527 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:49.527 [2024-11-26 19:13:06.493350] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:49.527 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.527 19:13:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:49.527 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:49.527 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:49.527 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:49.527 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:49.527 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.527 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:49.527 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.527 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:49.527 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.527 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:49.527 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.527 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:49.527 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.527 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:49.527 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.527 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:49.527 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.527 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:49.527 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.527 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:49.527 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.527 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:49.527 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:49.527 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:49.527 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:49.527 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.527 
19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:49.527 Malloc1 00:22:49.527 [2024-11-26 19:13:06.599308] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:49.527 Malloc2 00:22:49.527 Malloc3 00:22:49.527 Malloc4 00:22:49.527 Malloc5 00:22:49.786 Malloc6 00:22:49.786 Malloc7 00:22:49.786 Malloc8 00:22:49.786 Malloc9 00:22:49.786 Malloc10 00:22:49.787 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.787 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:49.787 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:49.787 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:49.787 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3007118 00:22:50.048 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3007118 /var/tmp/bdevperf.sock 00:22:50.048 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3007118 ']' 00:22:50.048 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:50.048 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:50.048 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:50.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
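Target-side setup for tc2 is now complete: the trace above created the TCP transport, and the Malloc1..Malloc10 lines correspond to one Malloc-backed namespace per cnode subsystem, listening on 10.0.0.2:4420. Below is an approximate rpc.py spelling of that per-subsystem sequence; the transport options, NQNs, and listener address come from the trace, while the bdev geometry (64 MiB / 512 B blocks) and the SPDK$i serial numbers are assumptions for illustration:

# approximate target-side setup; sizes and serials are illustrative
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
for i in {1..10}; do
    scripts/rpc.py bdev_malloc_create -b "Malloc$i" 64 512
    scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done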
00:22:50.048 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:50.048 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:50.048 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:50.048 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:50.048 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:22:50.048 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:22:50.048 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.048 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.048 { 00:22:50.048 "params": { 00:22:50.048 "name": "Nvme$subsystem", 00:22:50.048 "trtype": "$TEST_TRANSPORT", 00:22:50.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.048 "adrfam": "ipv4", 00:22:50.048 "trsvcid": "$NVMF_PORT", 00:22:50.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.048 "hdgst": ${hdgst:-false}, 00:22:50.048 "ddgst": ${ddgst:-false} 00:22:50.048 }, 00:22:50.048 "method": "bdev_nvme_attach_controller" 00:22:50.048 } 00:22:50.048 EOF 00:22:50.048 )") 00:22:50.048 19:13:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.048 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.048 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.048 { 00:22:50.048 "params": { 00:22:50.048 "name": "Nvme$subsystem", 00:22:50.048 "trtype": "$TEST_TRANSPORT", 00:22:50.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.048 "adrfam": "ipv4", 00:22:50.048 "trsvcid": "$NVMF_PORT", 00:22:50.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.048 "hdgst": ${hdgst:-false}, 00:22:50.048 "ddgst": ${ddgst:-false} 00:22:50.048 }, 00:22:50.048 "method": "bdev_nvme_attach_controller" 00:22:50.048 } 00:22:50.048 EOF 00:22:50.048 )") 00:22:50.048 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.048 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.048 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.048 { 00:22:50.048 "params": { 00:22:50.048 "name": "Nvme$subsystem", 00:22:50.048 "trtype": "$TEST_TRANSPORT", 00:22:50.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.048 "adrfam": "ipv4", 00:22:50.048 "trsvcid": "$NVMF_PORT", 00:22:50.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.048 "hdgst": ${hdgst:-false}, 00:22:50.048 "ddgst": ${ddgst:-false} 00:22:50.048 }, 00:22:50.048 "method": 
"bdev_nvme_attach_controller" 00:22:50.048 } 00:22:50.048 EOF 00:22:50.048 )") 00:22:50.048 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.048 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.048 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.048 { 00:22:50.048 "params": { 00:22:50.048 "name": "Nvme$subsystem", 00:22:50.048 "trtype": "$TEST_TRANSPORT", 00:22:50.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.048 "adrfam": "ipv4", 00:22:50.048 "trsvcid": "$NVMF_PORT", 00:22:50.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.048 "hdgst": ${hdgst:-false}, 00:22:50.048 "ddgst": ${ddgst:-false} 00:22:50.048 }, 00:22:50.048 "method": "bdev_nvme_attach_controller" 00:22:50.048 } 00:22:50.048 EOF 00:22:50.048 )") 00:22:50.048 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.048 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.048 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.048 { 00:22:50.048 "params": { 00:22:50.048 "name": "Nvme$subsystem", 00:22:50.048 "trtype": "$TEST_TRANSPORT", 00:22:50.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.048 "adrfam": "ipv4", 00:22:50.048 "trsvcid": "$NVMF_PORT", 00:22:50.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.048 "hdgst": ${hdgst:-false}, 00:22:50.048 "ddgst": ${ddgst:-false} 00:22:50.048 }, 00:22:50.048 "method": "bdev_nvme_attach_controller" 00:22:50.048 } 00:22:50.048 EOF 00:22:50.048 )") 00:22:50.048 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.048 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.048 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.048 { 00:22:50.048 "params": { 00:22:50.048 "name": "Nvme$subsystem", 00:22:50.048 "trtype": "$TEST_TRANSPORT", 00:22:50.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.048 "adrfam": "ipv4", 00:22:50.048 "trsvcid": "$NVMF_PORT", 00:22:50.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.048 "hdgst": ${hdgst:-false}, 00:22:50.048 "ddgst": ${ddgst:-false} 00:22:50.048 }, 00:22:50.048 "method": "bdev_nvme_attach_controller" 00:22:50.048 } 00:22:50.048 EOF 00:22:50.048 )") 00:22:50.048 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.048 [2024-11-26 19:13:07.044547] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
00:22:50.048 [2024-11-26 19:13:07.044601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3007118 ] 00:22:50.048 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.048 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.048 { 00:22:50.048 "params": { 00:22:50.048 "name": "Nvme$subsystem", 00:22:50.048 "trtype": "$TEST_TRANSPORT", 00:22:50.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.048 "adrfam": "ipv4", 00:22:50.048 "trsvcid": "$NVMF_PORT", 00:22:50.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.048 "hdgst": ${hdgst:-false}, 00:22:50.048 "ddgst": ${ddgst:-false} 00:22:50.048 }, 00:22:50.048 "method": "bdev_nvme_attach_controller" 00:22:50.048 } 00:22:50.048 EOF 00:22:50.048 )") 00:22:50.048 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.048 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.048 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.048 { 00:22:50.048 "params": { 00:22:50.048 "name": "Nvme$subsystem", 00:22:50.048 "trtype": "$TEST_TRANSPORT", 00:22:50.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.048 "adrfam": "ipv4", 00:22:50.048 "trsvcid": "$NVMF_PORT", 00:22:50.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.048 "hdgst": ${hdgst:-false}, 00:22:50.048 "ddgst": ${ddgst:-false} 00:22:50.048 }, 00:22:50.048 "method": "bdev_nvme_attach_controller" 00:22:50.048 } 00:22:50.048 EOF 00:22:50.049 )") 00:22:50.049 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.049 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.049 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.049 { 00:22:50.049 "params": { 00:22:50.049 "name": "Nvme$subsystem", 00:22:50.049 "trtype": "$TEST_TRANSPORT", 00:22:50.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.049 "adrfam": "ipv4", 00:22:50.049 "trsvcid": "$NVMF_PORT", 00:22:50.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.049 "hdgst": ${hdgst:-false}, 00:22:50.049 "ddgst": ${ddgst:-false} 00:22:50.049 }, 00:22:50.049 "method": "bdev_nvme_attach_controller" 00:22:50.049 } 00:22:50.049 EOF 00:22:50.049 )") 00:22:50.049 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.049 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:50.049 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:50.049 { 00:22:50.049 "params": { 00:22:50.049 "name": "Nvme$subsystem", 00:22:50.049 "trtype": "$TEST_TRANSPORT", 00:22:50.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.049 
"adrfam": "ipv4", 00:22:50.049 "trsvcid": "$NVMF_PORT", 00:22:50.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.049 "hdgst": ${hdgst:-false}, 00:22:50.049 "ddgst": ${ddgst:-false} 00:22:50.049 }, 00:22:50.049 "method": "bdev_nvme_attach_controller" 00:22:50.049 } 00:22:50.049 EOF 00:22:50.049 )") 00:22:50.049 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:50.049 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:22:50.049 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:22:50.049 19:13:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:50.049 "params": { 00:22:50.049 "name": "Nvme1", 00:22:50.049 "trtype": "tcp", 00:22:50.049 "traddr": "10.0.0.2", 00:22:50.049 "adrfam": "ipv4", 00:22:50.049 "trsvcid": "4420", 00:22:50.049 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:50.049 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:50.049 "hdgst": false, 00:22:50.049 "ddgst": false 00:22:50.049 }, 00:22:50.049 "method": "bdev_nvme_attach_controller" 00:22:50.049 },{ 00:22:50.049 "params": { 00:22:50.049 "name": "Nvme2", 00:22:50.049 "trtype": "tcp", 00:22:50.049 "traddr": "10.0.0.2", 00:22:50.049 "adrfam": "ipv4", 00:22:50.049 "trsvcid": "4420", 00:22:50.049 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:50.049 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:50.049 "hdgst": false, 00:22:50.049 "ddgst": false 00:22:50.049 }, 00:22:50.049 "method": "bdev_nvme_attach_controller" 00:22:50.049 },{ 00:22:50.049 "params": { 00:22:50.049 "name": "Nvme3", 00:22:50.049 "trtype": "tcp", 00:22:50.049 "traddr": "10.0.0.2", 00:22:50.049 "adrfam": "ipv4", 00:22:50.049 "trsvcid": "4420", 00:22:50.049 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:50.049 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:50.049 "hdgst": false, 00:22:50.049 "ddgst": false 00:22:50.049 }, 00:22:50.049 "method": "bdev_nvme_attach_controller" 00:22:50.049 },{ 00:22:50.049 "params": { 00:22:50.049 "name": "Nvme4", 00:22:50.049 "trtype": "tcp", 00:22:50.049 "traddr": "10.0.0.2", 00:22:50.049 "adrfam": "ipv4", 00:22:50.049 "trsvcid": "4420", 00:22:50.049 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:50.049 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:50.049 "hdgst": false, 00:22:50.049 "ddgst": false 00:22:50.049 }, 00:22:50.049 "method": "bdev_nvme_attach_controller" 00:22:50.049 },{ 00:22:50.049 "params": { 00:22:50.049 "name": "Nvme5", 00:22:50.049 "trtype": "tcp", 00:22:50.049 "traddr": "10.0.0.2", 00:22:50.049 "adrfam": "ipv4", 00:22:50.049 "trsvcid": "4420", 00:22:50.049 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:50.049 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:50.049 "hdgst": false, 00:22:50.049 "ddgst": false 00:22:50.049 }, 00:22:50.049 "method": "bdev_nvme_attach_controller" 00:22:50.049 },{ 00:22:50.049 "params": { 00:22:50.049 "name": "Nvme6", 00:22:50.049 "trtype": "tcp", 00:22:50.049 "traddr": "10.0.0.2", 00:22:50.049 "adrfam": "ipv4", 00:22:50.049 "trsvcid": "4420", 00:22:50.049 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:50.049 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:50.049 "hdgst": false, 00:22:50.049 "ddgst": false 00:22:50.049 }, 00:22:50.049 "method": "bdev_nvme_attach_controller" 00:22:50.049 },{ 00:22:50.049 "params": { 00:22:50.049 "name": "Nvme7", 00:22:50.049 "trtype": "tcp", 00:22:50.049 "traddr": "10.0.0.2", 
00:22:50.049 "adrfam": "ipv4", 00:22:50.049 "trsvcid": "4420", 00:22:50.049 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:50.049 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:50.049 "hdgst": false, 00:22:50.049 "ddgst": false 00:22:50.049 }, 00:22:50.049 "method": "bdev_nvme_attach_controller" 00:22:50.049 },{ 00:22:50.049 "params": { 00:22:50.049 "name": "Nvme8", 00:22:50.049 "trtype": "tcp", 00:22:50.049 "traddr": "10.0.0.2", 00:22:50.049 "adrfam": "ipv4", 00:22:50.049 "trsvcid": "4420", 00:22:50.049 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:50.049 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:50.049 "hdgst": false, 00:22:50.049 "ddgst": false 00:22:50.049 }, 00:22:50.049 "method": "bdev_nvme_attach_controller" 00:22:50.049 },{ 00:22:50.049 "params": { 00:22:50.049 "name": "Nvme9", 00:22:50.049 "trtype": "tcp", 00:22:50.049 "traddr": "10.0.0.2", 00:22:50.049 "adrfam": "ipv4", 00:22:50.049 "trsvcid": "4420", 00:22:50.049 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:50.049 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:50.049 "hdgst": false, 00:22:50.049 "ddgst": false 00:22:50.049 }, 00:22:50.049 "method": "bdev_nvme_attach_controller" 00:22:50.049 },{ 00:22:50.049 "params": { 00:22:50.049 "name": "Nvme10", 00:22:50.049 "trtype": "tcp", 00:22:50.049 "traddr": "10.0.0.2", 00:22:50.049 "adrfam": "ipv4", 00:22:50.049 "trsvcid": "4420", 00:22:50.049 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:50.049 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:50.049 "hdgst": false, 00:22:50.049 "ddgst": false 00:22:50.049 }, 00:22:50.049 "method": "bdev_nvme_attach_controller" 00:22:50.049 }' 00:22:50.049 [2024-11-26 19:13:07.135002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.049 [2024-11-26 19:13:07.171596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:51.427 Running I/O for 10 seconds... 
00:22:51.688 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:51.688 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:51.688 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:51.688 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.688 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:51.688 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.688 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:51.688 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:51.688 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:51.688 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:51.688 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:51.688 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:51.688 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:51.688 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:51.688 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:51.688 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.688 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:51.688 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.688 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:51.688 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:51.688 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:51.948 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:51.948 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:51.948 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:51.948 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:51.948 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.948 19:13:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:52.207 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.207 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:52.207 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:52.207 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:52.468 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:52.468 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:52.468 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:52.468 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:52.468 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.468 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:52.468 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.468 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:52.468 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:52.468 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:52.468 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:52.468 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:52.468 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3007118 00:22:52.468 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3007118 ']' 00:22:52.468 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3007118 00:22:52.468 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:52.468 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:52.468 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3007118 00:22:52.468 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:52.468 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:52.468 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3007118' 00:22:52.468 killing process with pid 3007118 00:22:52.468 19:13:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3007118
00:22:52.468 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3007118
00:22:52.468 Received shutdown signal, test time was about 0.977007 seconds
00:22:52.468
00:22:52.468 Latency(us)
00:22:52.468 [2024-11-26T18:13:09.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:52.468 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:52.468 Verification LBA range: start 0x0 length 0x400
00:22:52.468 Nvme1n1 : 0.95 201.65 12.60 0.00 0.00 313774.08 19879.25 251658.24
00:22:52.468 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:52.468 Verification LBA range: start 0x0 length 0x400
00:22:52.468 Nvme2n1 : 0.96 266.31 16.64 0.00 0.00 232836.05 19005.44 246415.36
00:22:52.468 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:52.468 Verification LBA range: start 0x0 length 0x400
00:22:52.468 Nvme3n1 : 0.97 263.59 16.47 0.00 0.00 230445.01 24248.32 246415.36
00:22:52.468 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:52.468 Verification LBA range: start 0x0 length 0x400
00:22:52.468 Nvme4n1 : 0.96 267.04 16.69 0.00 0.00 222659.41 17257.81 248162.99
00:22:52.468 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:52.468 Verification LBA range: start 0x0 length 0x400
00:22:52.468 Nvme5n1 : 0.94 208.48 13.03 0.00 0.00 276656.80 3659.09 251658.24
00:22:52.468 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:52.468 Verification LBA range: start 0x0 length 0x400
00:22:52.468 Nvme6n1 : 0.97 264.26 16.52 0.00 0.00 215349.12 14199.47 249910.61
00:22:52.468 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:52.468 Verification LBA range: start 0x0 length 0x400
00:22:52.468 Nvme7n1 : 0.98 262.26 16.39 0.00 0.00 212266.03 13325.65 255153.49
00:22:52.468 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:52.468 Verification LBA range: start 0x0 length 0x400
00:22:52.468 Nvme8n1 : 0.97 264.53 16.53 0.00 0.00 205738.24 18896.21 244667.73
00:22:52.468 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:52.468 Verification LBA range: start 0x0 length 0x400
00:22:52.468 Nvme9n1 : 0.96 200.74 12.55 0.00 0.00 264622.36 19223.89 274377.39
00:22:52.468 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:52.468 Verification LBA range: start 0x0 length 0x400
00:22:52.468 Nvme10n1 : 0.94 203.88 12.74 0.00 0.00 253471.86 18459.31 228939.09
00:22:52.468 [2024-11-26T18:13:09.681Z] ===================================================================================================================
00:22:52.468 [2024-11-26T18:13:09.681Z] Total : 2402.75 150.17 0.00 0.00 239046.92 3659.09 274377.39
00:22:52.728 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:22:53.667 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3006794
00:22:53.667 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:22:53.667 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:22:53.667 19:13:10
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:53.667 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:53.667 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:53.667 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:53.667 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:53.667 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:53.667 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:53.667 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:53.667 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:53.667 rmmod nvme_tcp 00:22:53.667 rmmod nvme_fabrics 00:22:53.667 rmmod nvme_keyring 00:22:53.667 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:53.667 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:53.668 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:53.668 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3006794 ']' 00:22:53.668 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3006794 00:22:53.668 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3006794 ']' 00:22:53.668 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3006794 00:22:53.668 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:53.668 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:53.668 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3006794 00:22:53.928 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:53.928 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:53.928 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3006794' 00:22:53.928 killing process with pid 3006794 00:22:53.928 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3006794 00:22:53.928 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3006794 00:22:53.928 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:53.928 19:13:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:53.928 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:53.928 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:53.928 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:22:53.928 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:53.928 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:22:53.928 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:53.928 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:53.928 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.928 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:53.928 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.471 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:56.471 00:22:56.471 real 0m7.998s 00:22:56.471 user 0m24.286s 00:22:56.471 sys 0m1.332s 00:22:56.471 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:56.471 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:56.472 ************************************ 00:22:56.472 END TEST nvmf_shutdown_tc2 00:22:56.472 ************************************ 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:56.472 ************************************ 00:22:56.472 START TEST nvmf_shutdown_tc3 00:22:56.472 ************************************ 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:56.472 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:56.472 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:56.472 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:56.473 19:13:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:56.473 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:56.473 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.473 19:13:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # 
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:22:56.473 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:56.473 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms
00:22:56.473
00:22:56.473 --- 10.0.0.2 ping statistics ---
00:22:56.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:56.473 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms
00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:56.473 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:56.473 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms
00:22:56.473
00:22:56.473 --- 10.0.0.1 ping statistics ---
00:22:56.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:56.473 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms
00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0
00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:22:56.473 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:56.474 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:22:56.474 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:22:56.474 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:56.474 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:22:56.474 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:22:56.474 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:22:56.474 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:56.474 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:56.474 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:22:56.474 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3008580
00:22:56.474 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3008580
00:22:56.474 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:22:56.474 19:13:13
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3008580 ']' 00:22:56.474 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:56.474 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:56.474 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:56.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:56.474 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:56.474 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:56.734 [2024-11-26 19:13:13.719622] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:22:56.734 [2024-11-26 19:13:13.719688] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:56.734 [2024-11-26 19:13:13.814389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:56.734 [2024-11-26 19:13:13.848492] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:56.734 [2024-11-26 19:13:13.848522] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:56.734 [2024-11-26 19:13:13.848528] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:56.734 [2024-11-26 19:13:13.848533] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:56.734 [2024-11-26 19:13:13.848537] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
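Editor's note: waitforlisten above parks the test until the freshly forked nvmf_tgt (pid 3008580) answers RPCs on /var/tmp/spdk.sock. A simplified sketch of that readiness check; probing with rpc_get_methods and guarding the pid with kill -0 mirrors what autotest_common.sh does, but the retry count and 0.1 s interval here are assumptions and the real helper carries considerably more error handling.

# Wait until the target process is alive *and* its RPC server responds.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 100; i != 0; i--)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
        if scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            return 0                             # socket is up and answering
        fi
        sleep 0.1
    done
    return 1
}

waitforlisten 3008580   # pid from the nvmfpid= line in the trace above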
00:22:56.734 [2024-11-26 19:13:13.850105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.734 [2024-11-26 19:13:13.850264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:56.734 [2024-11-26 19:13:13.850609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.734 [2024-11-26 19:13:13.850609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:57.422 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:57.422 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:57.422 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:57.422 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:57.422 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:57.422 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:57.422 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:57.422 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.422 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:57.422 [2024-11-26 19:13:14.566299] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:57.422 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.422 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:57.422 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:57.422 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:57.422 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:57.422 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:57.422 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.422 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:57.422 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.422 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:57.422 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.422 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:57.422 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:22:57.422 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:57.705 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.705 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:57.705 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.705 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:57.705 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.705 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:57.705 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.705 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:57.705 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.705 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:57.705 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:57.705 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:57.705 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:57.705 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.705 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:57.705 Malloc1 00:22:57.705 [2024-11-26 19:13:14.681806] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:57.705 Malloc2 00:22:57.705 Malloc3 00:22:57.705 Malloc4 00:22:57.705 Malloc5 00:22:57.705 Malloc6 00:22:57.705 Malloc7 00:22:57.966 Malloc8 00:22:57.966 Malloc9 00:22:57.966 Malloc10 00:22:57.966 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.966 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:57.966 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:57.966 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:57.966 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3008970 00:22:57.966 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3008970 /var/tmp/bdevperf.sock 00:22:57.966 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3008970 ']' 00:22:57.966 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:57.966 19:13:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:57.966 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:57.966 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:22:57.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:57.966 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10
00:22:57.966 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:57.966 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:22:57.966 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=()
00:22:57.966 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config
00:22:57.966 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:22:57.966 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:22:57.966 {
00:22:57.966 "params": {
00:22:57.966 "name": "Nvme$subsystem",
00:22:57.966 "trtype": "$TEST_TRANSPORT",
00:22:57.966 "traddr": "$NVMF_FIRST_TARGET_IP",
00:22:57.966 "adrfam": "ipv4",
00:22:57.966 "trsvcid": "$NVMF_PORT",
00:22:57.966 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:22:57.966 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:22:57.966 "hdgst": ${hdgst:-false},
00:22:57.966 "ddgst": ${ddgst:-false}
00:22:57.966 },
00:22:57.966 "method": "bdev_nvme_attach_controller"
00:22:57.966 }
00:22:57.966 EOF
00:22:57.966 )")
00:22:57.966 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat
[... the identical "for subsystem" / config+=(heredoc) / cat trace repeats for each of the remaining nine subsystems ...]
00:22:57.967 [2024-11-26 19:13:15.130038] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization...
00:22:57.967 [2024-11-26 19:13:15.130093] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3008970 ]
00:22:57.967 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq .
00:22:57.967 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=,
00:22:57.967 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:22:57.967 "params": {
00:22:57.967 "name": "Nvme1",
00:22:57.967 "trtype": "tcp",
00:22:57.967 "traddr": "10.0.0.2",
00:22:57.967 "adrfam": "ipv4",
00:22:57.967 "trsvcid": "4420",
00:22:57.967 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:22:57.967 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:22:57.967 "hdgst": false,
00:22:57.967 "ddgst": false
00:22:57.967 },
00:22:57.967 "method": "bdev_nvme_attach_controller"
00:22:57.967 },{
00:22:57.967 "params": {
00:22:57.967 "name": "Nvme2",
00:22:57.967 "trtype": "tcp",
00:22:57.967 "traddr": "10.0.0.2",
00:22:57.967 "adrfam": "ipv4",
00:22:57.967 "trsvcid": "4420",
00:22:57.967 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:22:57.967 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:22:57.967 "hdgst": false,
00:22:57.967 "ddgst": false
00:22:57.967 },
00:22:57.967 "method": "bdev_nvme_attach_controller"
00:22:57.967 },{
00:22:57.967 "params": {
00:22:57.967 "name": "Nvme3",
00:22:57.967 "trtype": "tcp",
00:22:57.967 "traddr": "10.0.0.2",
00:22:57.967 "adrfam": "ipv4",
00:22:57.967 "trsvcid": "4420",
00:22:57.967 "subnqn": "nqn.2016-06.io.spdk:cnode3",
00:22:57.967 "hostnqn": "nqn.2016-06.io.spdk:host3",
00:22:57.967 "hdgst": false,
00:22:57.967 "ddgst": false
00:22:57.967 },
00:22:57.967 "method": "bdev_nvme_attach_controller"
00:22:57.967 },{
00:22:57.967 "params": {
00:22:57.967 "name": "Nvme4",
00:22:57.967 "trtype": "tcp",
00:22:57.967 "traddr": "10.0.0.2",
00:22:57.967 "adrfam": "ipv4",
00:22:57.967 "trsvcid": "4420",
00:22:57.967 "subnqn": "nqn.2016-06.io.spdk:cnode4",
00:22:57.967 "hostnqn": "nqn.2016-06.io.spdk:host4",
00:22:57.967 "hdgst": false,
00:22:57.967 "ddgst": false
00:22:57.967 },
00:22:57.967 "method": "bdev_nvme_attach_controller"
00:22:57.967 },{
00:22:57.967 "params": {
00:22:57.967 "name": "Nvme5",
00:22:57.967 "trtype": "tcp",
00:22:57.967 "traddr": "10.0.0.2",
00:22:57.967 "adrfam": "ipv4",
00:22:57.967 "trsvcid": "4420",
00:22:57.967 "subnqn": "nqn.2016-06.io.spdk:cnode5",
00:22:57.967 "hostnqn": "nqn.2016-06.io.spdk:host5",
00:22:57.967 "hdgst": false,
00:22:57.967 "ddgst": false
00:22:57.967 },
00:22:57.967 "method": "bdev_nvme_attach_controller"
00:22:57.967 },{
00:22:57.967 "params": {
00:22:57.967 "name": "Nvme6",
00:22:57.967 "trtype": "tcp",
00:22:57.967 "traddr": "10.0.0.2",
00:22:57.967 "adrfam": "ipv4",
00:22:57.967 "trsvcid": "4420",
00:22:57.967 "subnqn": "nqn.2016-06.io.spdk:cnode6",
00:22:57.967 "hostnqn": "nqn.2016-06.io.spdk:host6",
00:22:57.967 "hdgst": false,
00:22:57.967 "ddgst": false
00:22:57.968 },
00:22:57.968 "method": "bdev_nvme_attach_controller"
00:22:57.968 },{
00:22:57.968 "params": {
00:22:57.968 "name": "Nvme7",
00:22:57.968 "trtype": "tcp",
00:22:57.968 "traddr": "10.0.0.2",
00:22:57.968 "adrfam": "ipv4",
00:22:57.968 "trsvcid": "4420",
00:22:57.968 "subnqn": "nqn.2016-06.io.spdk:cnode7",
00:22:57.968 "hostnqn": "nqn.2016-06.io.spdk:host7",
00:22:57.968 "hdgst": false,
00:22:57.968 "ddgst": false
00:22:57.968 },
00:22:57.968 "method": "bdev_nvme_attach_controller"
00:22:57.968 },{
00:22:57.968 "params": {
00:22:57.968 "name": "Nvme8",
00:22:57.968 "trtype": "tcp",
00:22:57.968 "traddr": "10.0.0.2",
00:22:57.968 "adrfam": "ipv4",
00:22:57.968 "trsvcid": "4420",
00:22:57.968 "subnqn": "nqn.2016-06.io.spdk:cnode8",
00:22:57.968 "hostnqn": "nqn.2016-06.io.spdk:host8",
00:22:57.968 "hdgst": false,
00:22:57.968 "ddgst": false
00:22:57.968 },
00:22:57.968 "method": "bdev_nvme_attach_controller"
00:22:57.968 },{
00:22:57.968 "params": {
00:22:57.968 "name": "Nvme9",
00:22:57.968 "trtype": "tcp",
00:22:57.968 "traddr": "10.0.0.2",
00:22:57.968 "adrfam": "ipv4",
00:22:57.968 "trsvcid": "4420",
00:22:57.968 "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:22:57.968 "hostnqn": "nqn.2016-06.io.spdk:host9",
00:22:57.968 "hdgst": false,
00:22:57.968 "ddgst": false
00:22:57.968 },
00:22:57.968 "method": "bdev_nvme_attach_controller"
00:22:57.968 },{
00:22:57.968 "params": {
00:22:57.968 "name": "Nvme10",
00:22:57.968 "trtype": "tcp",
00:22:57.968 "traddr": "10.0.0.2",
00:22:57.968 "adrfam": "ipv4",
00:22:57.968 "trsvcid": "4420",
00:22:57.968 "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:22:57.968 "hostnqn": "nqn.2016-06.io.spdk:host10",
00:22:57.968 "hdgst": false,
00:22:57.968 "ddgst": false
00:22:57.968 },
00:22:57.968 "method": "bdev_nvme_attach_controller"
00:22:57.968 }'
00:22:58.228 [2024-11-26 19:13:15.220430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:58.228 [2024-11-26 19:13:15.256731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:00.140 Running I/O for 10 seconds...
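The gen_nvmf_target_json trace above shows the pattern the harness uses to feed bdevperf its controllers: one heredoc JSON fragment per subsystem is appended to a bash array, the fragments are comma-joined via IFS, and the result is handed to bdevperf on /dev/fd/63. A minimal self-contained sketch of that pattern follows; gen_target_json_sketch and the fallback values are illustrative assumptions, not the harness's actual helper.

#!/usr/bin/env bash
# Sketch of the per-subsystem config assembly seen in the trace above.
# TEST_TRANSPORT/NVMF_FIRST_TARGET_IP/NVMF_PORT come from the test
# environment; the fallbacks here are assumptions for a standalone run.
TEST_TRANSPORT=${TEST_TRANSPORT:-tcp}
NVMF_FIRST_TARGET_IP=${NVMF_FIRST_TARGET_IP:-10.0.0.2}
NVMF_PORT=${NVMF_PORT:-4420}

gen_target_json_sketch() {
    local subsystem
    local config=()
    for subsystem in "${@:-1}"; do
        # One bdev_nvme_attach_controller call per subsystem/host NQN pair.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    # Comma-join the fragments, which is what IFS=, does in the trace.
    local IFS=,
    printf '%s\n' "${config[*]}"
}

# Fed to bdevperf through process substitution, mirroring --json /dev/fd/63:
# bdevperf -r /var/tmp/bdevperf.sock --json <(gen_target_json_sketch 1 2 3) \
#     -q 64 -o 65536 -w verify -t 10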
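Before triggering the shutdown, waitforio (traced below) samples bdev_get_iostat over the bdevperf RPC socket until Nvme1n1 has completed at least 100 reads, giving up after ten samples spaced 0.25 s apart. A minimal sketch of that polling loop, using SPDK's scripts/rpc.py directly in place of the harness's rpc_cmd wrapper (an assumption for a standalone run):

waitforio_sketch() {
    local rpc_sock=$1 bdev=$2
    local ret=1 i read_io_count
    [ -n "$rpc_sock" ] || return 1
    [ -n "$bdev" ] || return 1
    for ((i = 10; i != 0; i--)); do
        # One iostat sample; jq pulls out the cumulative read-op counter.
        read_io_count=$(rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "${read_io_count:-0}" -ge 100 ]; then
            ret=0   # I/O is demonstrably flowing; safe to start the shutdown
            break
        fi
        sleep 0.25
    done
    return $ret
}

# Usage matching the trace: waitforio_sketch /var/tmp/bdevperf.sock Nvme1n1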
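Once the read threshold is met, the test kills the nvmf target (pid 3008580 in the trace below) via the killprocess helper, which checks that the pid is alive and is not a bare sudo wrapper before signalling and reaping it. A simplified sketch of the traced checks; anything beyond what the trace shows is an assumption:

killprocess_sketch() {
    local pid=$1 process_name=
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1   # is the process still alive?
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    # Never signal the sudo wrapper itself, only the real workload.
    [ "$process_name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true   # reap it if it is our child
}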
00:23:00.711 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:00.711 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0
00:23:00.711 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:23:00.711 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:00.711 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:23:00.711 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:00.711 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:23:00.711 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:23:00.711 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:23:00.711 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']'
00:23:00.711 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1
00:23:00.711 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i
00:23:00.711 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 ))
00:23:00.711 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:23:00.711 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:23:00.711 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:23:00.711 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:00.711 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:23:00.711 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:00.711 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67
00:23:00.711 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']'
00:23:00.711 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25
00:23:00.987 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- ))
00:23:00.987 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:23:00.987 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:23:00.987 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:23:00.987 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:00.987 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:23:00.987 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:00.987 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=150
00:23:00.987 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 150 -ge 100 ']'
00:23:00.987 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0
00:23:00.987 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break
00:23:00.987 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0
00:23:00.987 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3008580
00:23:00.987 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3008580 ']'
00:23:00.987 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3008580
00:23:00.987 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname
00:23:00.987 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:00.987 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3008580
00:23:00.987 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:23:00.987 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:23:00.987 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3008580'
00:23:00.987 killing process with pid 3008580
00:23:00.987 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3008580
00:23:00.987 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3008580
00:23:00.987 [2024-11-26 19:13:18.097203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea3810 is same with the state(6) to be set
[... the identical recv-state error repeats for tqpair=0x1ea3810, 0x1ed19f0, 0x1ea3d00, 0x1ea41d0, 0x1ea46c0 and 0x1ea4b90 ...]
00:23:00.992 [2024-11-26 19:13:18.104078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea4b90 is same with the
state(6) to be set 00:23:00.992 [2024-11-26 19:13:18.104083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea4b90 is same with the state(6) to be set 00:23:00.992 [2024-11-26 19:13:18.104087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea4b90 is same with the state(6) to be set 00:23:00.992 [2024-11-26 19:13:18.104092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea4b90 is same with the state(6) to be set 00:23:00.992 [2024-11-26 19:13:18.104097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea4b90 is same with the state(6) to be set 00:23:00.992 [2024-11-26 19:13:18.104102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea4b90 is same with the state(6) to be set 00:23:00.992 [2024-11-26 19:13:18.104107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea4b90 is same with the state(6) to be set 00:23:00.992 [2024-11-26 19:13:18.104111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea4b90 is same with the state(6) to be set 00:23:00.992 [2024-11-26 19:13:18.104116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea4b90 is same with the state(6) to be set 00:23:00.992 [2024-11-26 19:13:18.104120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea4b90 is same with the state(6) to be set 00:23:00.992 [2024-11-26 19:13:18.105039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.992 [2024-11-26 19:13:18.105052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.992 [2024-11-26 19:13:18.105057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.992 [2024-11-26 19:13:18.105062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.992 [2024-11-26 19:13:18.105067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.992 [2024-11-26 19:13:18.105072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.992 [2024-11-26 19:13:18.105077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.992 [2024-11-26 19:13:18.105082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.992 [2024-11-26 19:13:18.105087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.992 [2024-11-26 19:13:18.105092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.992 [2024-11-26 19:13:18.105096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.992 [2024-11-26 19:13:18.105101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.992 [2024-11-26 19:13:18.105105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.992 [2024-11-26 19:13:18.105110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.992 [2024-11-26 19:13:18.105115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.992 [2024-11-26 19:13:18.105120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.992 [2024-11-26 19:13:18.105124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 
19:13:18.105217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same 
with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.105348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5060 is same with the state(6) to be set 00:23:00.993 [2024-11-26 19:13:18.106265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106345] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the 
state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.994 [2024-11-26 19:13:18.106563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.995 [2024-11-26 19:13:18.106567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5530 is same with the state(6) to be set 00:23:00.995 [2024-11-26 19:13:18.107192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5a20 is same with the state(6) to be set 00:23:00.995 [2024-11-26 19:13:18.107206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5a20 is same with the state(6) to be set 00:23:00.995 [2024-11-26 19:13:18.107211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5a20 is same with the state(6) to be set 00:23:00.995 [2024-11-26 19:13:18.107216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5a20 is same with the state(6) to be set 00:23:00.995 [2024-11-26 19:13:18.107224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5a20 is same with the state(6) to be set 00:23:00.995 [2024-11-26 19:13:18.107229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5a20 is same with the state(6) to be set 00:23:00.995 [2024-11-26 19:13:18.107234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5a20 is same with the state(6) to be set 00:23:00.995 [2024-11-26 19:13:18.107239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5a20 is same with the state(6) to be set 00:23:00.995 [2024-11-26 19:13:18.107244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5a20 is same with the state(6) to be set 00:23:00.995 [2024-11-26 19:13:18.107248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5a20 is same with the state(6) to be set 00:23:00.995 [2024-11-26 19:13:18.107253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5a20 is same with the state(6) to be set 00:23:00.995 [2024-11-26 19:13:18.107258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5a20 is same with the state(6) to be set 00:23:00.995 [2024-11-26 19:13:18.107262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5a20 is same with the state(6) to be set 00:23:00.995 [2024-11-26 19:13:18.107267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5a20 is same with the state(6) to be set 00:23:00.995 [2024-11-26 19:13:18.107272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5a20 is same with the state(6) to be set 00:23:00.995 [2024-11-26 19:13:18.107277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5a20 is same with the state(6) to be set 00:23:00.995 [2024-11-26 
19:13:18.107282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5a20 is same with the state(6) to be set 00:23:00.995 [2024-11-26 19:13:18.107287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5a20 is same with the state(6) to be set 00:23:00.995 [2024-11-26 19:13:18.107292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5a20 is same with the state(6) to be set 00:23:00.995 [2024-11-26 19:13:18.115738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5a20 is same with the state(6) to be set 00:23:00.995 [2024-11-26 19:13:18.115758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5a20 is same with the state(6) to be set 00:23:00.995 [2024-11-26 19:13:18.115765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5a20 is same with the state(6) to be set 00:23:00.995 [2024-11-26 19:13:18.115770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5a20 is same with the state(6) to be set 00:23:00.995 [2024-11-26 19:13:18.115776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5a20 is same with the state(6) to be set 00:23:00.995 [2024-11-26 19:13:18.115782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5a20 is same with the state(6) to be set 00:23:00.995 [2024-11-26 19:13:18.115788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5a20 is same with the state(6) to be set 00:23:00.995 [2024-11-26 19:13:18.115793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5a20 is same with the state(6) to be set 00:23:00.995 [2024-11-26 19:13:18.115799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5a20 is same with the state(6) to be set 00:23:00.995 [2024-11-26 19:13:18.115805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5a20 is same with the state(6) to be set 00:23:00.995 [2024-11-26 19:13:18.115810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5a20 is same with the state(6) to be set 00:23:00.995 [2024-11-26 19:13:18.115816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5a20 is same with the state(6) to be set 00:23:00.995 [2024-11-26 19:13:18.115825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5a20 is same with the state(6) to be set 00:23:00.995 [2024-11-26 19:13:18.115832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5a20 is same with the state(6) to be set 00:23:00.995 [2024-11-26 19:13:18.115838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5a20 is same with the state(6) to be set 00:23:00.995 [2024-11-26 19:13:18.115843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5a20 is same with the state(6) to be set 00:23:00.995 [2024-11-26 19:13:18.115849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5a20 is same with the state(6) to be set 00:23:00.995 [2024-11-26 19:13:18.115854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5a20 is same with the state(6) to be set 00:23:00.995 [2024-11-26 19:13:18.115859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5a20 is same 
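The runs collapsed above are one diagnostic firing in a tight loop: as the test forces the connections down, the target repeatedly tries to set a receive state that each qpair is already in, and every redundant attempt is logged. A minimal sketch of that kind of set-state guard, using hypothetical type and state names rather than the actual SPDK source, and assuming state(6) is the qpair's terminal state:

#include <stdio.h>

/* Hypothetical stand-ins for the target's internal types; the names are
 * illustrative and chosen only to mirror the log text above. */
enum pdu_recv_state {
	PDU_RECV_STATE_AWAIT_PDU_READY = 0,	/* assumed "normal" state */
	PDU_RECV_STATE_ERROR = 6		/* assumed meaning of state(6) */
};

struct tcp_qpair {
	enum pdu_recv_state recv_state;
};

/* Guard sketch: a request to enter the state the qpair already occupies is
 * logged and ignored, producing one line of output per redundant call. */
static void set_recv_state(struct tcp_qpair *tqpair, enum pdu_recv_state state)
{
	if (tqpair->recv_state == state) {
		fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
			(void *)tqpair, (int)state);
		return;
	}
	tqpair->recv_state = state;
}

int main(void)
{
	struct tcp_qpair q = { .recv_state = PDU_RECV_STATE_AWAIT_PDU_READY };

	set_recv_state(&q, PDU_RECV_STATE_ERROR);	/* real transition, silent */
	set_recv_state(&q, PDU_RECV_STATE_ERROR);	/* redundant, logs once */
	set_recv_state(&q, PDU_RECV_STATE_ERROR);	/* redundant, logs again */
	return 0;
}

If a poller reaches such a guard on every iteration until the qpair is finally destroyed, a single stuck qpair is enough to account for the hundreds of identical lines collapsed above.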
00:23:00.995 1800.00 IOPS, 112.50 MiB/s [2024-11-26T18:13:18.208Z]
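As a cross-check on the bdevperf progress line above: the aborted I/O commands printed further below are len:128 blocks each, and assuming a 512-byte block size (an assumption; the block size is not stated in this excerpt), each I/O is 128 x 512 B = 64 KiB, so 1800 IOPS x 64 KiB = 115200 KiB/s = 112.50 MiB/s, matching the reported throughput exactly.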
00:23:00.995 [... tcp.c:1773 recv-state message continued for tqpair=0x1ea5a20 through 2024-11-26 19:13:18.115992, interleaved with the host-side output below ...]
00:23:00.995 [2024-11-26 19:13:18.115928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:00.995 [2024-11-26 19:13:18.115959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:00.996 [2024-11-26 19:13:18.115970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:00.996 [2024-11-26 19:13:18.115978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:00.996 [2024-11-26 19:13:18.115989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:00.996 [2024-11-26 19:13:18.115997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:00.996 [2024-11-26 19:13:18.116005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:00.996 [2024-11-26 19:13:18.116012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:00.996 [2024-11-26 19:13:18.116020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18cc0 is same with the state(6) to be set
00:23:00.996 [... the same ASYNC EVENT REQUEST (cid:0-3) / ABORTED - SQ DELETION sequence repeats, 2024-11-26 19:13:18.116058 through 19:13:18.116774, for host tqpairs 0x2144360, 0x213c720, 0x1c30610, 0x2171210, 0x1d18850, 0x1d16fc0, 0x2144c30 and 0x2180ed0 ...]
00:23:00.997 [2024-11-26 19:13:18.116478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed1520 is same with the state(6) to be set
00:23:00.998 [... message repeated through 2024-11-26 19:13:18.116820 for tqpair=0x1ed1520, interleaved with the host-side output above ...]
19:13:18.116820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed1520 is same with the state(6) to be set 00:23:00.998 [2024-11-26 19:13:18.117169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.998 [2024-11-26 19:13:18.117190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.998 [2024-11-26 19:13:18.117205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.998 [2024-11-26 19:13:18.117214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.998 [2024-11-26 19:13:18.117223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.998 [2024-11-26 19:13:18.117231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.998 [2024-11-26 19:13:18.117240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.998 [2024-11-26 19:13:18.117248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.998 [2024-11-26 19:13:18.117261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.998 [2024-11-26 19:13:18.117269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.998 [2024-11-26 19:13:18.117278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.998 [2024-11-26 19:13:18.117286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.998 [2024-11-26 19:13:18.117295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.998 [2024-11-26 19:13:18.117303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.998 [2024-11-26 19:13:18.117313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.998 [2024-11-26 19:13:18.117320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.998 [2024-11-26 19:13:18.117329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.998 [2024-11-26 19:13:18.117337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.998 [2024-11-26 19:13:18.117347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.998 [2024-11-26 19:13:18.117354] 
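
The paired NOTICE lines above are SPDK dumping each still-outstanding command and the synthetic completion it was finished with once its submission queue was deleted during the controller reset. The "(00/08)" pair is the NVMe status code type and status code: SCT 0x0 (generic command status) and SC 0x08 (command aborted due to SQ deletion). A minimal sketch of how that pair unpacks from the 16-bit completion status word, assuming the standard NVMe status layout rather than SPDK's own structs:

/* Decode the "(SCT/SC)" pair that spdk_nvme_print_completion prints,
 * e.g. "ABORTED - SQ DELETION (00/08)".  Assumes the NVMe spec layout:
 * bit 0 = phase tag, bits 8:1 = SC, bits 11:9 = SCT, bit 14 = M,
 * bit 15 = DNR.  Illustrative only, not SPDK's implementation. */
#include <stdint.h>
#include <stdio.h>

static void print_status(uint16_t status)
{
    unsigned sc  = (status >> 1) & 0xff; /* Status Code */
    unsigned sct = (status >> 9) & 0x7;  /* Status Code Type */
    unsigned m   = (status >> 14) & 1;   /* More */
    unsigned dnr = (status >> 15) & 1;   /* Do Not Retry */

    printf("(%02x/%02x) m:%u dnr:%u\n", sct, sc, m, dnr);
}

int main(void)
{
    /* SCT 0x0 (generic), SC 0x08 (aborted due to SQ deletion) */
    print_status((uint16_t)(0x08 << 1)); /* prints "(00/08) m:0 dnr:0" */
    return 0;
}
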
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.998 [2024-11-26 19:13:18.117364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.998 [2024-11-26 19:13:18.117371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.998 [2024-11-26 19:13:18.117381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.998 [2024-11-26 19:13:18.117388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.998 [2024-11-26 19:13:18.117398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.998 [2024-11-26 19:13:18.117405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.998 [2024-11-26 19:13:18.117414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.998 [2024-11-26 19:13:18.117422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.998 [2024-11-26 19:13:18.117431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.998 [2024-11-26 19:13:18.117439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.998 [2024-11-26 19:13:18.117448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.998 [2024-11-26 19:13:18.117456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.998 [2024-11-26 19:13:18.117465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.998 [2024-11-26 19:13:18.117474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.998 [2024-11-26 19:13:18.117483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.998 [2024-11-26 19:13:18.117490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.998 [2024-11-26 19:13:18.117500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.998 [2024-11-26 19:13:18.117507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.998 [2024-11-26 19:13:18.117516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.998 [2024-11-26 19:13:18.117523] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.998 [2024-11-26 19:13:18.117534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.998 [2024-11-26 19:13:18.117541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.998 [2024-11-26 19:13:18.117550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.998 [2024-11-26 19:13:18.117557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.998 [2024-11-26 19:13:18.117567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.998 [2024-11-26 19:13:18.117574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.998 [2024-11-26 19:13:18.117583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.998 [2024-11-26 19:13:18.117590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.998 [2024-11-26 19:13:18.117600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.998 [2024-11-26 19:13:18.117608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.999 [2024-11-26 19:13:18.117617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.999 [2024-11-26 19:13:18.117625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.999 [2024-11-26 19:13:18.117634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.999 [2024-11-26 19:13:18.117641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.999 [2024-11-26 19:13:18.117651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.999 [2024-11-26 19:13:18.117658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.999 [2024-11-26 19:13:18.117667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.999 [2024-11-26 19:13:18.117675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.999 [2024-11-26 19:13:18.117685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.999 [2024-11-26 19:13:18.117693] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.999 [2024-11-26 19:13:18.117702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.999 [2024-11-26 19:13:18.117710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.999 [2024-11-26 19:13:18.117719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.999 [2024-11-26 19:13:18.117726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.999 [2024-11-26 19:13:18.117735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.999 [2024-11-26 19:13:18.117743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.999 [2024-11-26 19:13:18.117752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.999 [2024-11-26 19:13:18.117759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.999 [2024-11-26 19:13:18.117769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.999 [2024-11-26 19:13:18.117776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.999 [2024-11-26 19:13:18.117786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.999 [2024-11-26 19:13:18.117794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.999 [2024-11-26 19:13:18.117803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.999 [2024-11-26 19:13:18.117811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.999 [2024-11-26 19:13:18.117820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.999 [2024-11-26 19:13:18.117827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.999 [2024-11-26 19:13:18.117837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.999 [2024-11-26 19:13:18.117844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.999 [2024-11-26 19:13:18.117853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.999 [2024-11-26 19:13:18.117860] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.999 [2024-11-26 19:13:18.117870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.999 [2024-11-26 19:13:18.117877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.999 [2024-11-26 19:13:18.117886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.999 [2024-11-26 19:13:18.117895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.999 [2024-11-26 19:13:18.117904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.999 [2024-11-26 19:13:18.117912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.999 [2024-11-26 19:13:18.117921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.999 [2024-11-26 19:13:18.117929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.999 [2024-11-26 19:13:18.117938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.999 [2024-11-26 19:13:18.117946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.999 [2024-11-26 19:13:18.117955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.999 [2024-11-26 19:13:18.117962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.999 [2024-11-26 19:13:18.117972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.999 [2024-11-26 19:13:18.117979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.999 [2024-11-26 19:13:18.117989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.999 [2024-11-26 19:13:18.117996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.999 [2024-11-26 19:13:18.118005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.999 [2024-11-26 19:13:18.118013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.999 [2024-11-26 19:13:18.118022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.999 [2024-11-26 19:13:18.118029] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.999 [2024-11-26 19:13:18.118038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.999 [2024-11-26 19:13:18.118046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.999 [2024-11-26 19:13:18.118055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.999 [2024-11-26 19:13:18.118063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.999 [2024-11-26 19:13:18.118072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.999 [2024-11-26 19:13:18.118079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.999 [2024-11-26 19:13:18.118088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.999 [2024-11-26 19:13:18.118096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.999 [2024-11-26 19:13:18.118106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.999 [2024-11-26 19:13:18.118114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.999 [2024-11-26 19:13:18.118123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.999 [2024-11-26 19:13:18.118130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.999 [2024-11-26 19:13:18.118140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.999 [2024-11-26 19:13:18.118148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.999 [2024-11-26 19:13:18.118162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.999 [2024-11-26 19:13:18.118169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.999 [2024-11-26 19:13:18.118179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.999 [2024-11-26 19:13:18.118187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.999 [2024-11-26 19:13:18.118198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.000 [2024-11-26 19:13:18.118206] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.000 [2024-11-26 19:13:18.118215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.000 [2024-11-26 19:13:18.118223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.000 [2024-11-26 19:13:18.118233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.000 [2024-11-26 19:13:18.118240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.000 [2024-11-26 19:13:18.118250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.000 [2024-11-26 19:13:18.118257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.000 [2024-11-26 19:13:18.118266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.000 [2024-11-26 19:13:18.118274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.000 [2024-11-26 19:13:18.120248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:23:01.000 [2024-11-26 19:13:18.120280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2180ed0 (9): Bad file descriptor 00:23:01.000 [2024-11-26 19:13:18.120331] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:01.000 [2024-11-26 19:13:18.120370] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:01.000 [2024-11-26 19:13:18.120406] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:01.000 [2024-11-26 19:13:18.120442] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:01.000 [2024-11-26 19:13:18.120476] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:01.000 [2024-11-26 19:13:18.120513] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:01.000 [2024-11-26 19:13:18.120552] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:01.000 [2024-11-26 19:13:18.121016] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:01.000 [2024-11-26 19:13:18.121057] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:01.000 [2024-11-26 19:13:18.121456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.000 [2024-11-26 19:13:18.121495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2180ed0 with addr=10.0.0.2, port=4420 00:23:01.000 [2024-11-26 19:13:18.121507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2180ed0 is same with the state(6) to be set 00:23:01.000 [2024-11-26 19:13:18.121598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2180ed0 (9): Bad file descriptor 00:23:01.000 [2024-11-26 19:13:18.121648] 
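
At this point the reset of nqn.2016-06.io.spdk:cnode10 tries to reconnect, and the socket layer reports "connect() failed, errno = 111"; errno 111 is ECONNREFUSED on Linux, i.e. nothing is accepting on 10.0.0.2:4420 while the target side is being torn down, so nvme_tcp_qpair_connect_sock fails and the reconnect poll gives up. The "Unexpected PDU type 0x00" errors are plausibly the same teardown seen one layer up: PDU common headers read off the dying connection fail the type check. A rough sketch of the failing step, using plain POSIX sockets rather than SPDK's sock abstraction:

/* Attempt the same blocking TCP connect the reconnect path needs.
 * On a refused connection this reports errno 111 (ECONNREFUSED) and
 * returns an error so the caller can schedule a retry. Illustrative. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int try_connect(const char *ip, uint16_t port)
{
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(port) };
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0)
        return -errno;
    inet_pton(AF_INET, ip, &sa.sin_addr);
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        int err = errno;
        close(fd);
        fprintf(stderr, "connect() failed, errno = %d (%s)\n", err, strerror(err));
        return -err; /* caller (the reset path) retries or fails the ctrlr */
    }
    return fd;
}

int main(void)
{
    return try_connect("10.0.0.2", 4420) < 0 ? 1 : 0;
}
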
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:23:01.000 [2024-11-26 19:13:18.121657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:23:01.000 [2024-11-26 19:13:18.121666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:23:01.000 [2024-11-26 19:13:18.121676] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:23:01.000 [2024-11-26 19:13:18.125919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d18cc0 (9): Bad file descriptor 00:23:01.000 [2024-11-26 19:13:18.125962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.000 [2024-11-26 19:13:18.125973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.000 [2024-11-26 19:13:18.125983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.000 [2024-11-26 19:13:18.125991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.000 [2024-11-26 19:13:18.126000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.000 [2024-11-26 19:13:18.126007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.000 [2024-11-26 19:13:18.126015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.000 [2024-11-26 19:13:18.126023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.000 [2024-11-26 19:13:18.126030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a330 is same with the state(6) to be set 00:23:01.000 [2024-11-26 19:13:18.126051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2144360 (9): Bad file descriptor 00:23:01.000 [2024-11-26 19:13:18.126068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213c720 (9): Bad file descriptor 00:23:01.000 [2024-11-26 19:13:18.126085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c30610 (9): Bad file descriptor 00:23:01.000 [2024-11-26 19:13:18.126103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2171210 (9): Bad file descriptor 00:23:01.000 [2024-11-26 19:13:18.126122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d18850 (9): Bad file descriptor 00:23:01.000 [2024-11-26 19:13:18.126138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d16fc0 (9): Bad file descriptor 00:23:01.000 [2024-11-26 19:13:18.126156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2144c30 (9): Bad file descriptor 00:23:01.000 [2024-11-26 19:13:18.131053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] 
resetting controller 00:23:01.000 [2024-11-26 19:13:18.131457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.000 [2024-11-26 19:13:18.131474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2180ed0 with addr=10.0.0.2, port=4420 00:23:01.000 [2024-11-26 19:13:18.131481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2180ed0 is same with the state(6) to be set 00:23:01.000 [2024-11-26 19:13:18.131527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2180ed0 (9): Bad file descriptor 00:23:01.000 [2024-11-26 19:13:18.131572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:23:01.000 [2024-11-26 19:13:18.131580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:23:01.000 [2024-11-26 19:13:18.131588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:23:01.000 [2024-11-26 19:13:18.131595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:23:01.000 [2024-11-26 19:13:18.135962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x216a330 (9): Bad file descriptor 00:23:01.000 [2024-11-26 19:13:18.136133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.000 [2024-11-26 19:13:18.136146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.000 [2024-11-26 19:13:18.136166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.000 [2024-11-26 19:13:18.136174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.000 [2024-11-26 19:13:18.136185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.000 [2024-11-26 19:13:18.136193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.000 [2024-11-26 19:13:18.136203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.000 [2024-11-26 19:13:18.136211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.000 [2024-11-26 19:13:18.136222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.000 [2024-11-26 19:13:18.136232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.000 [2024-11-26 19:13:18.136243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.000 [2024-11-26 19:13:18.136251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.000 [2024-11-26 
19:13:18.136262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.000 [2024-11-26 19:13:18.136271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.000 [2024-11-26 19:13:18.136281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.000 [2024-11-26 19:13:18.136289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.000 [2024-11-26 19:13:18.136298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.000 [2024-11-26 19:13:18.136310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.000 [2024-11-26 19:13:18.136320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.000 [2024-11-26 19:13:18.136328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.000 [2024-11-26 19:13:18.136338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.000 [2024-11-26 19:13:18.136345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.000 [2024-11-26 19:13:18.136355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.000 [2024-11-26 19:13:18.136362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.000 [2024-11-26 19:13:18.136371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.000 [2024-11-26 19:13:18.136379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.000 [2024-11-26 19:13:18.136388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.000 [2024-11-26 19:13:18.136396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.000 [2024-11-26 19:13:18.136405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.136413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.136422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.136430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.136439] 
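
These long runs repeat for every queue pair being destroyed: each still-queued READ/WRITE is printed with its sqid/cid/lba and then completed in place with the ABORTED - SQ DELETION status, so a single deep queue being drained produces dozens of NOTICE pairs. Schematically (names and the request list are illustrative, not SPDK's internals):

/* Drain a torn-down I/O queue: log each outstanding request, then
 * complete it with status (00/08), generic / aborted - SQ deletion. */
#include <stdint.h>
#include <stdio.h>

struct req {
    uint16_t cid;
    uint64_t lba;
    uint32_t len;
    struct req *next;
};

static void abort_all(const struct req *outstanding, unsigned sqid)
{
    for (const struct req *r = outstanding; r != NULL; r = r->next) {
        printf("READ sqid:%u cid:%u lba:%llu len:%u\n",
               sqid, r->cid, (unsigned long long)r->lba, r->len);
        printf("ABORTED - SQ DELETION (00/08) qid:%u cid:%u\n", sqid, r->cid);
    }
}

int main(void)
{
    struct req r2 = { .cid = 1, .lba = 24704, .len = 128, .next = NULL };
    struct req r1 = { .cid = 0, .lba = 24576, .len = 128, .next = &r2 };

    abort_all(&r1, 1);
    return 0;
}
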
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.136447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.136457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.136466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.136476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.136484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.136494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.136504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.136516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.136525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.136536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.136544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.136554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.136561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.136571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.136578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.136588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.136595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.136605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.136613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.136622] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.136629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.136638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.136646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.136655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.136663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.136672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.136679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.136689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.136697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.136706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.136714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.136723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.136730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.136740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.136750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.136760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.136768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.136778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.136786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.136795] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.136803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.136812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.136819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.136829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.136836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.136846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.136853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.136863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.136870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.136879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.136887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.136896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.136904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.136914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.136921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.136931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.136938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.136948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.136955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.136966] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.136974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.136983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.136990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.137000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.137007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.137016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.137024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.137033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.137041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.137050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.137057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.137067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.137076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.137085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.001 [2024-11-26 19:13:18.137092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.001 [2024-11-26 19:13:18.137102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.002 [2024-11-26 19:13:18.137110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.002 [2024-11-26 19:13:18.137119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.002 [2024-11-26 19:13:18.137127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.002 [2024-11-26 19:13:18.137136] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.002 [2024-11-26 19:13:18.137144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.002 [2024-11-26 19:13:18.137154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.002 [2024-11-26 19:13:18.137166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.002 [2024-11-26 19:13:18.137175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.002 [2024-11-26 19:13:18.137185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.002 [2024-11-26 19:13:18.137197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.002 [2024-11-26 19:13:18.137206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.002 [2024-11-26 19:13:18.137217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.002 [2024-11-26 19:13:18.137227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.002 [2024-11-26 19:13:18.137239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.002 [2024-11-26 19:13:18.137248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.002 [2024-11-26 19:13:18.137260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.002 [2024-11-26 19:13:18.137271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.002 [2024-11-26 19:13:18.137283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.002 [2024-11-26 19:13:18.137292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.002 [2024-11-26 19:13:18.137303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1cc90 is same with the state(6) to be set 00:23:01.002 [2024-11-26 19:13:18.138594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.002 [2024-11-26 19:13:18.138609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.002 [2024-11-26 19:13:18.138622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.002 [2024-11-26 19:13:18.138631] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:01.002 [2024-11-26 19:13:18.138643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:01.002 [2024-11-26 19:13:18.138653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[condensed: the identical READ-command/ABORTED-completion pair repeats for cid:3 through cid:63 (lba 24960 through 32640, len:128), every completion reporting ABORTED - SQ DELETION (00/08)]
00:23:01.003 [2024-11-26 19:13:18.139700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1dcc0 is same with the state(6) to be set
00:23:01.003 [2024-11-26 19:13:18.140976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:01.003 [2024-11-26 19:13:18.140989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[condensed: the same pattern repeats for READ cid:4 through cid:63 (lba 25088 through 32640) and WRITE cid:0 through cid:2 (lba 32768 through 33024), all aborted with SQ DELETION (00/08)]
00:23:01.005 [2024-11-26 19:13:18.142131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ed1a0 is same with the state(6) to be set
00:23:01.005 [2024-11-26 19:13:18.143414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:01.005 [2024-11-26 19:13:18.143427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[condensed: the same pattern repeats for READ cid:1 through cid:63 (lba 24704 through 32640), all aborted with SQ DELETION (00/08)]
00:23:01.007 [2024-11-26 19:13:18.144554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211a0a0 is same with the state(6) to be set
00:23:01.007 [2024-11-26 19:13:18.145823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:01.007 [2024-11-26 19:13:18.145837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[condensed: the same pattern continues for READ cid:1 through cid:8 (lba 16512 through 17408), all aborted with SQ DELETION (00/08)]
00:23:01.007 [2024-11-26 19:13:18.146000] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.007 [2024-11-26 19:13:18.146008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.007 [2024-11-26 19:13:18.146018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.007 [2024-11-26 19:13:18.146026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.007 [2024-11-26 19:13:18.146035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.007 [2024-11-26 19:13:18.146043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.007 [2024-11-26 19:13:18.146052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.007 [2024-11-26 19:13:18.146060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.007 [2024-11-26 19:13:18.146071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.007 [2024-11-26 19:13:18.146081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.007 [2024-11-26 19:13:18.146090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.007 [2024-11-26 19:13:18.146098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.007 [2024-11-26 19:13:18.146108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.007 [2024-11-26 19:13:18.146115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.007 [2024-11-26 19:13:18.146125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.007 [2024-11-26 19:13:18.146133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.007 [2024-11-26 19:13:18.146143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.007 [2024-11-26 19:13:18.146152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.007 [2024-11-26 19:13:18.146165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.007 [2024-11-26 19:13:18.146173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.007 [2024-11-26 19:13:18.146183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.007 [2024-11-26 19:13:18.146190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.007 [2024-11-26 19:13:18.146201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.007 [2024-11-26 19:13:18.146209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.007 [2024-11-26 19:13:18.146220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.007 [2024-11-26 19:13:18.146228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.007 [2024-11-26 19:13:18.146237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.007 [2024-11-26 19:13:18.146245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.007 [2024-11-26 19:13:18.146254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.007 [2024-11-26 19:13:18.146261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.007 [2024-11-26 19:13:18.146272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.007 [2024-11-26 19:13:18.146280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.007 [2024-11-26 19:13:18.146290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.007 [2024-11-26 19:13:18.146298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.007 [2024-11-26 19:13:18.146309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.007 [2024-11-26 19:13:18.146317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.007 [2024-11-26 19:13:18.146326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.007 [2024-11-26 19:13:18.146335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.007 [2024-11-26 19:13:18.146346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.007 [2024-11-26 19:13:18.146354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.008 [2024-11-26 19:13:18.146364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.008 [2024-11-26 19:13:18.146371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.008 [2024-11-26 19:13:18.146380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.008 [2024-11-26 19:13:18.146388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.008 [2024-11-26 19:13:18.146397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.008 [2024-11-26 19:13:18.146404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.008 [2024-11-26 19:13:18.146414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.008 [2024-11-26 19:13:18.146421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.008 [2024-11-26 19:13:18.146430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.008 [2024-11-26 19:13:18.146437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.008 [2024-11-26 19:13:18.146447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.008 [2024-11-26 19:13:18.146454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.008 [2024-11-26 19:13:18.146463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.008 [2024-11-26 19:13:18.146471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.008 [2024-11-26 19:13:18.146480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.008 [2024-11-26 19:13:18.146487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.008 [2024-11-26 19:13:18.146496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.008 [2024-11-26 19:13:18.146504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.008 [2024-11-26 19:13:18.146514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.008 [2024-11-26 19:13:18.146523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.008 [2024-11-26 19:13:18.146533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:01.008 [2024-11-26 19:13:18.146541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.008 [2024-11-26 19:13:18.146552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.008 [2024-11-26 19:13:18.146559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.008 [2024-11-26 19:13:18.146569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.008 [2024-11-26 19:13:18.146576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.008 [2024-11-26 19:13:18.146585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.008 [2024-11-26 19:13:18.146593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.008 [2024-11-26 19:13:18.146603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.008 [2024-11-26 19:13:18.146612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.008 [2024-11-26 19:13:18.146622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.008 [2024-11-26 19:13:18.146631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.008 [2024-11-26 19:13:18.146642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.008 [2024-11-26 19:13:18.146651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.008 [2024-11-26 19:13:18.146663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.008 [2024-11-26 19:13:18.146672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.008 [2024-11-26 19:13:18.146684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.008 [2024-11-26 19:13:18.146694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.008 [2024-11-26 19:13:18.146705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.008 [2024-11-26 19:13:18.146713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.008 [2024-11-26 19:13:18.146724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:01.008 [2024-11-26 19:13:18.146734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.008 [2024-11-26 19:13:18.146744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.008 [2024-11-26 19:13:18.146754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.008 [2024-11-26 19:13:18.146766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.008 [2024-11-26 19:13:18.146774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.008 [2024-11-26 19:13:18.146784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.008 [2024-11-26 19:13:18.146792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.008 [2024-11-26 19:13:18.146802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.008 [2024-11-26 19:13:18.146809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.008 [2024-11-26 19:13:18.146819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.008 [2024-11-26 19:13:18.146827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.008 [2024-11-26 19:13:18.146837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.008 [2024-11-26 19:13:18.146846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.008 [2024-11-26 19:13:18.146856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.008 [2024-11-26 19:13:18.146863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.008 [2024-11-26 19:13:18.146873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.008 [2024-11-26 19:13:18.146880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.008 [2024-11-26 19:13:18.146889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.008 [2024-11-26 19:13:18.146897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.008 [2024-11-26 19:13:18.146907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.008 [2024-11-26 
19:13:18.146915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.008 [2024-11-26 19:13:18.146926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.008 [2024-11-26 19:13:18.146934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.008 [2024-11-26 19:13:18.146944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.008 [2024-11-26 19:13:18.146951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.008 [2024-11-26 19:13:18.146960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.008 [2024-11-26 19:13:18.146968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.008 [2024-11-26 19:13:18.146978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.146988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.146997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211b2d0 is same with the state(6) to be set 00:23:01.009 [2024-11-26 19:13:18.148272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.148299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.148320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.148340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.148361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.148379] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.148396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.148413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.148433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.148451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.148467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.148484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.148505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.148523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.148539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.148557] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.148574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.148592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.148609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.148627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.148645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.148662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.148679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.148695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.148712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.148733] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.148751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.148768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.148784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.148802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.148822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.148838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.148855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.148874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.148892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.148909] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.148926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.148946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.009 [2024-11-26 19:13:18.148964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.009 [2024-11-26 19:13:18.148972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.010 [2024-11-26 19:13:18.148981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.010 [2024-11-26 19:13:18.148988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.010 [2024-11-26 19:13:18.148998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.010 [2024-11-26 19:13:18.149007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.010 [2024-11-26 19:13:18.149017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.010 [2024-11-26 19:13:18.149024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.010 [2024-11-26 19:13:18.149034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.010 [2024-11-26 19:13:18.149041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.010 [2024-11-26 19:13:18.149050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.010 [2024-11-26 19:13:18.149058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.010 [2024-11-26 19:13:18.149068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.010 [2024-11-26 19:13:18.149077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.010 [2024-11-26 19:13:18.149087] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.010 [2024-11-26 19:13:18.149094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.010 [2024-11-26 19:13:18.149104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.010 [2024-11-26 19:13:18.149111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.010 [2024-11-26 19:13:18.149120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.010 [2024-11-26 19:13:18.149129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.010 [2024-11-26 19:13:18.149139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.010 [2024-11-26 19:13:18.149147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.010 [2024-11-26 19:13:18.149157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.010 [2024-11-26 19:13:18.149171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.010 [2024-11-26 19:13:18.149180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.010 [2024-11-26 19:13:18.149188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.010 [2024-11-26 19:13:18.149197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.010 [2024-11-26 19:13:18.149205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.010 [2024-11-26 19:13:18.149215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.010 [2024-11-26 19:13:18.149224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.010 [2024-11-26 19:13:18.149234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.010 [2024-11-26 19:13:18.149241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.010 [2024-11-26 19:13:18.149251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.010 [2024-11-26 19:13:18.149258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.010 [2024-11-26 19:13:18.149267] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.010 [2024-11-26 19:13:18.149275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.010 [2024-11-26 19:13:18.149285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.010 [2024-11-26 19:13:18.149294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.010 [2024-11-26 19:13:18.149304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.010 [2024-11-26 19:13:18.149312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.010 [2024-11-26 19:13:18.149324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.010 [2024-11-26 19:13:18.149332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.010 [2024-11-26 19:13:18.149342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.010 [2024-11-26 19:13:18.149350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.010 [2024-11-26 19:13:18.149361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.010 [2024-11-26 19:13:18.149370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.010 [2024-11-26 19:13:18.149380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.010 [2024-11-26 19:13:18.149388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.010 [2024-11-26 19:13:18.149398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.010 [2024-11-26 19:13:18.149406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.010 [2024-11-26 19:13:18.149416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.010 [2024-11-26 19:13:18.149423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.010 [2024-11-26 19:13:18.149432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211c520 is same with the state(6) to be set 00:23:01.010 [2024-11-26 19:13:18.150711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.010 [2024-11-26 19:13:18.150726] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.010 [2024-11-26 19:13:18.150740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.010 [2024-11-26 19:13:18.150750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.010 [2024-11-26 19:13:18.150762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.010 [2024-11-26 19:13:18.150772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.010 [2024-11-26 19:13:18.150783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.010 [2024-11-26 19:13:18.150793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.010 [2024-11-26 19:13:18.150804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.010 [2024-11-26 19:13:18.150813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.010 [2024-11-26 19:13:18.150822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.010 [2024-11-26 19:13:18.150830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.010 [2024-11-26 19:13:18.150840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.010 [2024-11-26 19:13:18.150847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.010 [2024-11-26 19:13:18.150857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.010 [2024-11-26 19:13:18.150864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.010 [2024-11-26 19:13:18.150874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.010 [2024-11-26 19:13:18.150881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.010 [2024-11-26 19:13:18.150891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.010 [2024-11-26 19:13:18.150900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.010 [2024-11-26 19:13:18.150912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.010 [2024-11-26 19:13:18.150920] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:01.010 [2024-11-26 19:13:18.150930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:01.010 [2024-11-26 19:13:18.150938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) notice pairs repeat for cid:12 through cid:63, lba 26112 through 32640 ...]
00:23:01.012 [2024-11-26 19:13:18.156421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211d7e0 is same with the state(6) to be set
00:23:01.012 [2024-11-26 19:13:18.157743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:01.012 [2024-11-26 19:13:18.157759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) notice pairs repeat for cid:1 through cid:63, lba 24704 through 32640 ...]
00:23:01.014 [2024-11-26 19:13:18.158896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211eaf0 is same with the state(6) to be set
00:23:01.014 [2024-11-26 19:13:18.160176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:23:01.014 [2024-11-26 19:13:18.160198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:23:01.014 [2024-11-26 19:13:18.160212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:23:01.014 [2024-11-26 19:13:18.160298] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:23:01.014 [2024-11-26 19:13:18.160315] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:23:01.014 [2024-11-26 19:13:18.160327] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:23:01.014 [2024-11-26 19:13:18.160339] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:23:01.014 [2024-11-26 19:13:18.160356] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:23:01.014 [2024-11-26 19:13:18.160462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:23:01.014 [2024-11-26 19:13:18.160476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:23:01.014 [2024-11-26 19:13:18.160488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:23:01.014 [2024-11-26 19:13:18.160500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:23:01.014 [2024-11-26 19:13:18.160760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:01.014 [2024-11-26 19:13:18.160779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d18cc0 with addr=10.0.0.2, port=4420
00:23:01.014 [2024-11-26 19:13:18.160792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18cc0 is same with the state(6) to be set
00:23:01.014 [2024-11-26 19:13:18.161119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:01.014 [2024-11-26 19:13:18.161131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d18850 with addr=10.0.0.2, port=4420
00:23:01.014 [2024-11-26 19:13:18.161139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18850 is same with the state(6) to be set
00:23:01.014 [2024-11-26 19:13:18.161470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:01.014 [2024-11-26 19:13:18.161509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d16fc0 with addr=10.0.0.2, port=4420
00:23:01.014 [2024-11-26 19:13:18.161521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d16fc0 is same with the state(6) to be set
00:23:01.014 [2024-11-26 19:13:18.163666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:23:01.014 [2024-11-26 19:13:18.164005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:01.014 [2024-11-26 19:13:18.164021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2144c30 with addr=10.0.0.2, port=4420
00:23:01.014 [2024-11-26 19:13:18.164030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2144c30 is same with the state(6) to be set
00:23:01.014 [2024-11-26 19:13:18.164506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:01.014 [2024-11-26 19:13:18.164545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2144360 with addr=10.0.0.2, port=4420
00:23:01.014 [2024-11-26 19:13:18.164556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2144360 is same with the state(6) to be set
00:23:01.014 [2024-11-26 19:13:18.164761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:01.014 [2024-11-26 19:13:18.164773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c720 with addr=10.0.0.2, port=4420
00:23:01.014 [2024-11-26 19:13:18.164781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213c720 is same with the state(6) to be set
00:23:01.014 [2024-11-26 19:13:18.164986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:01.014 [2024-11-26 19:13:18.164997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c30610 with addr=10.0.0.2, port=4420
00:23:01.014 [2024-11-26 19:13:18.165006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c30610 is same with the state(6) to be set
00:23:01.014 [2024-11-26 19:13:18.165020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d18cc0 (9): Bad file descriptor
00:23:01.014 [2024-11-26 19:13:18.165031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d18850 (9): Bad file descriptor
00:23:01.014 [2024-11-26 19:13:18.165042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d16fc0 (9): Bad file descriptor
00:23:01.014 [2024-11-26 19:13:18.165155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:01.014 [2024-11-26 19:13:18.165177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) notice pairs repeat for cid:1 through cid:63, lba 16512 through 24448 ...]
00:23:01.016 [2024-11-26 19:13:18.166331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211fdb0 is same with the state(6) to be set
00:23:01.016 [2024-11-26 19:13:18.167868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:23:01.279 task offset: 31872 on job bdev=Nvme10n1 fails
00:23:01.279
00:23:01.279 Latency(us)
00:23:01.279 [2024-11-26T18:13:18.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:01.279 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:01.279 Job: Nvme1n1 ended in about 1.04 seconds with error
00:23:01.279 Verification LBA range: start 0x0 length 0x400
00:23:01.279 Nvme1n1 : 1.04 189.23 11.83 61.47 0.00 252583.32 16602.45 232434.35
00:23:01.279 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:01.279 Job: Nvme2n1 ended in about 1.04 seconds with error
00:23:01.279 Verification LBA range: start 0x0 length 0x400
00:23:01.279 Nvme2n1 : 1.04 184.00 11.50 61.33 0.00 253281.07 21517.65 276125.01
00:23:01.279 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:01.279 Job: Nvme3n1 ended in about 1.05 seconds with error
00:23:01.279 Verification LBA range: start 0x0 length 0x400
00:23:01.279 Nvme3n1 : 1.05 186.44 11.65 61.19 0.00 246110.45 16602.45 244667.73
00:23:01.279 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:01.279 Job: Nvme4n1 ended in about 1.05 seconds with error
00:23:01.279 Verification LBA range: start 0x0 length 0x400
00:23:01.279 Nvme4n1 : 1.05 183.15 11.45 61.05 0.00 244792.32 24903.68 235929.60
00:23:01.279 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:01.279 Job: Nvme5n1 ended in about 1.05 seconds with error
00:23:01.279 Verification LBA range: start 0x0 length 0x400
00:23:01.279 Nvme5n1 : 1.05 121.82 7.61 60.91 0.00 320830.58 18131.63 291853.65
00:23:01.279 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:01.279 Job: Nvme6n1 ended in about 1.05 seconds with error
00:23:01.279 Verification LBA range: start 0x0 length 0x400
00:23:01.279 Nvme6n1 : 1.05 182.31 11.39 60.77 0.00 236323.41 20316.16 298844.16
00:23:01.279 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:01.279 Job: Nvme7n1 ended in about 1.06 seconds with error
00:23:01.279 Verification LBA range: start 0x0 length 0x400
00:23:01.279 Nvme7n1 : 1.06 181.10 11.32 60.37 0.00 233210.24 19442.35 249910.61
00:23:01.279 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:01.279 Job: Nvme8n1 ended in about 1.06 seconds with error
00:23:01.279 Verification LBA range: start 0x0 length 0x400
00:23:01.279 Nvme8n1 : 1.06 180.69 11.29 60.23 0.00 229006.08 20206.93 253405.87
00:23:01.279 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:01.279 Job: Nvme9n1 ended in about 1.07 seconds with error
00:23:01.279 Verification LBA range: start 0x0 length 0x400
00:23:01.279 Nvme9n1 : 1.07 119.62 7.48 59.81 0.00 301392.21 18568.53 272629.76
00:23:01.279 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:01.279 Job: Nvme10n1 ended in about 1.02 seconds with error
00:23:01.279 Verification LBA range: start 0x0 length 0x400
00:23:01.279 Nvme10n1 : 1.02 187.78 11.74 62.59 0.00 209325.15 3085.65 260396.37
00:23:01.279 [2024-11-26T18:13:18.492Z] ===================================================================================================================
00:23:01.279 [2024-11-26T18:13:18.492Z] Total : 1716.15 107.26 609.73 0.00 249612.22 3085.65 298844.16
00:23:01.279 [2024-11-26 19:13:18.191920] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:01.279 [2024-11-26 19:13:18.191971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:23:01.279 [2024-11-26 19:13:18.192336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:01.279 [2024-11-26 19:13:18.192357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2171210 with addr=10.0.0.2, port=4420
00:23:01.279 [2024-11-26 19:13:18.192368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2171210 is same with the state(6) to be set
00:23:01.279 [2024-11-26 19:13:18.192383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2144c30 (9): Bad file descriptor
00:23:01.279 [2024-11-26 19:13:18.192395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2144360 (9): Bad file descriptor
00:23:01.279 [2024-11-26 19:13:18.192412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213c720 (9): Bad file descriptor
00:23:01.279 [2024-11-26 19:13:18.192422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c30610 (9): Bad file descriptor
00:23:01.279 [... cnode1, cnode2 and cnode3 each log the same four-entry sequence: nvme_ctrlr.c:4206 "Ctrlr is in error state", nvme_ctrlr.c:1826 "controller reinitialization failed", nvme_ctrlr.c:1110 "in failed state.", bdev_nvme.c:2280 "Resetting controller failed." ...]
00:23:01.280 [2024-11-26 19:13:18.192942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:01.280 [2024-11-26 19:13:18.192957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2180ed0 with addr=10.0.0.2, port=4420
00:23:01.280 [2024-11-26 19:13:18.192964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2180ed0 is same with the state(6) to be set
00:23:01.280 [2024-11-26 19:13:18.193280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:01.280 [2024-11-26 19:13:18.193291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x216a330 with addr=10.0.0.2, port=4420
00:23:01.280 [2024-11-26 19:13:18.193298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216a330 is same with the state(6) to be set
00:23:01.280 [2024-11-26 19:13:18.193307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2171210 (9): Bad file descriptor
00:23:01.280 [... cnode4, cnode5, cnode6 and cnode7 each log the same four-entry reset-failure sequence ...]
00:23:01.280 [2024-11-26 19:13:18.193489] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:23:01.280 [2024-11-26 19:13:18.193823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2180ed0 (9): Bad file descriptor
00:23:01.280 [2024-11-26 19:13:18.193837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x216a330 (9): Bad file descriptor
00:23:01.280 [... cnode8 logs the same four-entry reset-failure sequence ...]
00:23:01.280 [... nvme_ctrlr.c:1728 "resetting controller" notices follow for cnode3, cnode2, cnode1, cnode7, cnode6, cnode5 and cnode4 ...]
00:23:01.280 [... cnode10 and cnode9 each log the same four-entry reset-failure sequence ...]
00:23:01.280 [... posix.c:1054 connect() failed (errno = 111) / nvme_tcp.c:2288 sock connection error / nvme_tcp.c:326 recv-state errors repeat for tqpairs 0x1d16fc0, 0x1d18850, 0x1d18cc0, 0x1c30610, 0x213c720, 0x2144360 and 0x2144c30, each with addr=10.0.0.2, port=4420 ...]
00:23:01.280 [... nvme_tcp.c:2085 "Failed to flush tqpair=... (9): Bad file descriptor" then repeats for the same seven tqpairs ...]
00:23:01.281 [... cnode3, cnode2, cnode1, cnode7, cnode6, cnode5 and cnode4 each log the four-entry reset-failure sequence once more; all ten controller resets have failed ...]
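The teardown trace that follows steps `NOT wait 3008970` through the NOT helper in common/autotest_common.sh: the wait's exit status 255 is folded to 127, normalized to es=1, and NOT returns success precisely because the wrapped command failed. A minimal bash sketch of that exit-status mapping, assuming the names visible in the trace (the real helper also routes the argument through valid_exec_arg and type -t, elided here, so its exact body may differ):

NOT() {
    local es=0
    "$@" || es=$?              # run the wrapped command, capture its exit status
    (( es > 128 )) && es=127   # fold signal-range statuses (255 in this trace) down to 127
    case "$es" in              # normalize any remaining failure to es=1
        0) ;;
        *) es=1 ;;
    esac
    (( !es == 0 )) && return 0 # NOT succeeds exactly when the wrapped command failed
    return 1
}

# As in the trace: bdevperf (pid 3008970) is already gone, so `wait` fails,
# NOT inverts that into success, and the test proceeds to stoptarget.
NOT wait 3008970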
00:23:01.281 19:13:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:23:02.222 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3008970 00:23:02.222 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:23:02.222 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3008970 00:23:02.222 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:23:02.222 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:02.222 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:23:02.222 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:02.222 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3008970 00:23:02.222 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:23:02.222 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:02.222 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:23:02.222 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:23:02.222 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:23:02.222 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:02.222 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:23:02.222 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:02.222 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:02.222 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:02.222 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:02.222 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:02.222 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:23:02.222 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:02.222 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:23:02.222 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:02.222 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:02.222 rmmod nvme_tcp 00:23:02.222 
rmmod nvme_fabrics 00:23:02.222 rmmod nvme_keyring 00:23:02.483 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:02.483 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:23:02.483 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:23:02.483 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3008580 ']' 00:23:02.483 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3008580 00:23:02.483 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3008580 ']' 00:23:02.483 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3008580 00:23:02.483 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3008580) - No such process 00:23:02.483 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3008580 is not found' 00:23:02.483 Process with pid 3008580 is not found 00:23:02.483 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:02.483 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:02.483 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:02.483 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:23:02.483 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:23:02.483 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:02.483 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:23:02.483 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:02.483 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:02.483 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.483 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:02.483 19:13:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.395 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:04.395 00:23:04.395 real 0m8.256s 00:23:04.395 user 0m21.459s 00:23:04.395 sys 0m1.330s 00:23:04.395 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:04.395 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:04.395 ************************************ 00:23:04.395 END TEST nvmf_shutdown_tc3 00:23:04.395 ************************************ 00:23:04.395 19:13:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:23:04.395 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:23:04.395 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:23:04.395 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:04.395 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:04.395 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:04.656 ************************************ 00:23:04.656 START TEST nvmf_shutdown_tc4 00:23:04.656 ************************************ 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:04.656 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:04.657 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:04.657 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.657 19:13:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:04.657 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:04.657 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:04.657 19:13:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:04.657 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:04.918 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:04.918 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:04.918 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:04.918 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:04.918 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:04.918 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:23:04.918 00:23:04.918 --- 10.0.0.2 ping statistics --- 00:23:04.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.918 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:23:04.918 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:04.918 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:04.918 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:23:04.918 00:23:04.918 --- 10.0.0.1 ping statistics --- 00:23:04.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.918 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:23:04.918 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:04.918 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:23:04.918 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:04.918 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:04.918 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:04.918 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:04.918 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:04.918 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:04.918 19:13:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:04.918 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:04.918 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:04.918 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:04.918 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:04.918 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3010432 00:23:04.918 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3010432 00:23:04.918 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3010432 ']' 00:23:04.918 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:04.918 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:04.918 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:04.918 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:04.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
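The nvmftestinit trace above reduces to a small namespace recipe: the target-side port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace so the initiator reaches 10.0.0.2:4420 over real TCP, and both directions are ping-checked. A consolidated sketch of exactly the commands shown, assuming this rig's interface names and 10.0.0.0/24 addressing:

TARGET_NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0       # port the SPDK target will own, 10.0.0.2/24
INITIATOR_IF=cvl_0_1    # port left in the root namespace, 10.0.0.1/24

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$TARGET_NS"
ip link set "$TARGET_IF" netns "$TARGET_NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
ip netns exec "$TARGET_NS" ip link set lo up
# open the NVMe/TCP listener port through the initiator-side interface
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                              # root ns -> target ns
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1   # target ns -> root ns

The nvmf_tgt process itself is then launched under `ip netns exec cvl_0_0_ns_spdk`, as the nvmfappstart line in the trace shows.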
00:23:04.918 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:04.918 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:04.918 [2024-11-26 19:13:22.090749] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:23:04.918 [2024-11-26 19:13:22.090815] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:05.179 [2024-11-26 19:13:22.183544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:05.179 [2024-11-26 19:13:22.214829] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:05.179 [2024-11-26 19:13:22.214859] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:05.179 [2024-11-26 19:13:22.214865] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:05.179 [2024-11-26 19:13:22.214870] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:05.179 [2024-11-26 19:13:22.214874] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:05.179 [2024-11-26 19:13:22.216354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:05.179 [2024-11-26 19:13:22.216572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:05.179 [2024-11-26 19:13:22.216722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:05.179 [2024-11-26 19:13:22.216723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:05.749 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:05.749 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:23:05.749 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:05.749 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:05.749 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:05.749 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:05.749 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:05.749 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.749 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:05.749 [2024-11-26 19:13:22.926232] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:05.749 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.749 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:05.749 19:13:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:05.749 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:05.749 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:05.749 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:05.749 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:05.749 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:05.749 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:05.749 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:05.749 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:05.749 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:05.749 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:05.749 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:06.009 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:06.009 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:06.009 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:06.009 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:06.009 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:06.009 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:06.009 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:06.009 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:06.009 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:06.009 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:06.009 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:06.009 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:06.009 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:06.009 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.009 19:13:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:06.009 Malloc1 
00:23:06.009 [2024-11-26 19:13:23.036070] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:06.009 Malloc2
00:23:06.009 Malloc3
00:23:06.009 Malloc4
00:23:06.009 Malloc5
00:23:06.009 Malloc6
00:23:06.269 Malloc7
00:23:06.269 Malloc8
00:23:06.269 Malloc9
00:23:06.269 Malloc10
00:23:06.269 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:06.269 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:23:06.269 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:23:06.269 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:23:06.269 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3010659
00:23:06.269 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5
00:23:06.269 19:13:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4
00:23:06.529 [2024-11-26 19:13:23.516521] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:23:11.837 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:23:11.837 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3010432
00:23:11.837 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3010432 ']'
00:23:11.837 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3010432
00:23:11.837 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname
00:23:11.837 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:11.837 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3010432
00:23:11.837 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:23:11.837 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:23:11.837 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3010432'
00:23:11.837 killing process with pid 3010432
00:23:11.837 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3010432
00:23:11.837 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3010432
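This is the crux of nvmf_shutdown_tc4: spdk_nvme_perf (perfpid=3010659) is launched against the listener at 10.0.0.2:4420, and after the 'sleep 5' the nvmf target application (pid 3010432) is killed out from under it while writes are still in flight. An annotated restatement of the perf invocation traced above; the -q/-o/-w/-t/-r readings follow the tool's help text, while the -O and -P readings are assumptions on my part, not confirmed by this log:

perf_args=(
  -q 128          # queue depth: 128 outstanding I/Os per qpair
  -o 45056        # I/O size in bytes (44 KiB)
  -O 4096         # assumption: metadata/io-unit size in bytes
  -w randwrite    # 100% random writes
  -t 20           # run for 20 seconds
  -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420'   # the listener created above
  -P 4            # assumption: 4 I/O qpairs per namespace (matches 'qpair id 1..4' below)
)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf "${perf_args[@]}"

The killprocess xtrace above reads back as roughly the following helper; this is a reconstruction from the traced lines, not the verbatim autotest_common.sh source:

killprocess() {
  local pid=$1
  [ -z "$pid" ] && return 1
  if kill -0 "$pid" 2>/dev/null; then                  # is the process still alive?
    if [ "$(uname)" = Linux ]; then
      process_name=$(ps --no-headers -o comm= "$pid")  # here: reactor_1
    fi
    # (the real helper special-cases process_name = sudo; not needed here)
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                        # reap it before the test proceeds
  fi
}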
00:23:11.837 [2024-11-26 19:13:28.515702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f7570 is same with the state(6) to be set
00:23:11.837 [2024-11-26 19:13:28.516002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f7a40 is same with the state(6) to be set
00:23:11.837 [2024-11-26 19:13:28.516264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f7f30 is same with the state(6) to be set
[... each recv-state error repeats several times for its tqpair; from here on spdk_nvme_perf also logs 'Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' for every outstanding I/O, interleaved character-by-character with the target's error lines; the repeats are elided and only the first occurrence of each distinct message is kept below ...]
00:23:11.837 [2024-11-26 19:13:28.518901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f9b20 is same with the state(6) to be set
00:23:11.837 [2024-11-26 19:13:28.519123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25fa010 is same with the state(6) to be set
00:23:11.838 [2024-11-26 19:13:28.519194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:11.838 [2024-11-26 19:13:28.519387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25fa500 is same with the state(6) to be set
00:23:11.838 [2024-11-26 19:13:28.519652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f9650 is same with the state(6) to be set
00:23:11.838 [2024-11-26 19:13:28.519888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f87a0 is same with the state(6) to be set
00:23:11.838 [2024-11-26 19:13:28.520045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:11.838 [2024-11-26 19:13:28.520092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f8c70 is same with the state(6) to be set
00:23:11.838 [2024-11-26 19:13:28.520334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f9160 is same with the state(6) to be set
00:23:11.839 [2024-11-26 19:13:28.520966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
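The storm that begins here is spdk_nvme_perf's completion path reporting every in-flight write after the target died: sct=0 is the NVMe generic command status type, in which status code 0x08 is 'Command Aborted due to SQ Deletion' per the NVMe base spec status tables, and -6 is -ENXIO ('No such device or address'), the same errno carried by the CQ transport errors. A hypothetical helper that decodes the one status pair seen in this log (the mapping is from the spec; anything else is passed through):

decode_nvme_status() {
  # NVMe base spec, Generic Command Status (SCT 0x0): SC 0x08 is
  # "Command Aborted due to SQ Deletion" -- expected when qpairs are
  # deleted underneath queued I/O, as in this shutdown test.
  local sct=$1 sc=$2
  if [ "$sct" -eq 0 ] && [ "$sc" -eq 8 ]; then
    echo "generic status: Command Aborted due to SQ Deletion"
  else
    echo "sct=$sct sc=$sc: consult the NVMe base spec status tables"
  fi
}
decode_nvme_status 0 8   # the pair every failed write below reports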
[... the write-failure pattern continues while the remaining qpairs are torn down; only the distinct error lines are kept below ...]
00:23:11.839 [2024-11-26 19:13:28.522492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:11.839 NVMe io qpair process completion error
00:23:11.840 [2024-11-26 19:13:28.523575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:11.840 [2024-11-26 19:13:28.524461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:11.841 [2024-11-26 19:13:28.525353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:11.841 [2024-11-26 19:13:28.526940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:11.841 NVMe io qpair process completion error
00:23:11.842 [2024-11-26 19:13:28.528112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
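At this point cnode10 and cnode4 have lost all four of their I/O qpairs and cnode7 is going down. When triaging a saved copy of a console log like this one, a per-controller, per-qpair tally is usually the fastest summary; a hypothetical example (the build.log file name is an assumption):

# total failed writes reported by spdk_nvme_perf
grep -c 'Write completed with error (sct=0, sc=8)' build.log
# which controllers/qpairs dropped, and how many times each line appeared
grep -oE 'cnode[0-9]+, 1\] CQ transport error -6 .* on qpair id [0-9]+' build.log | sort | uniq -c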
[... write-failure storm continues; distinct error lines only ...]
00:23:11.842 [2024-11-26 19:13:28.528906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:11.843 [2024-11-26 19:13:28.529838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:11.844 [2024-11-26 19:13:28.531732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:11.844 NVMe io qpair process completion error
00:23:11.844 [2024-11-26 19:13:28.532732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:11.844 [2024-11-26 19:13:28.533544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... write-failure lines continue past the end of this excerpt ...]
00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 [2024-11-26 19:13:28.534478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 
00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.845 starting I/O failed: -6 00:23:11.845 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 [2024-11-26 19:13:28.537190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) 
on qpair id 4 00:23:11.846 NVMe io qpair process completion error 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 [2024-11-26 19:13:28.538270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:11.846 starting I/O failed: -6 00:23:11.846 starting I/O failed: -6 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 
00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 [2024-11-26 19:13:28.539235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.846 starting I/O failed: -6 00:23:11.846 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 
Write completed with error (sct=0, sc=8) 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 [2024-11-26 19:13:28.540208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 
00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.847 Write completed with error (sct=0, sc=8) 00:23:11.847 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 
00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 [2024-11-26 19:13:28.541870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:11.848 NVMe io qpair process completion error 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write 
completed with error (sct=0, sc=8) 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 [2024-11-26 19:13:28.542903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:11.848 starting I/O failed: -6 00:23:11.848 starting I/O failed: -6 00:23:11.848 starting I/O failed: -6 00:23:11.848 starting I/O failed: -6 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 
00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.848 starting I/O failed: -6 00:23:11.848 Write completed with error (sct=0, sc=8) 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 [2024-11-26 19:13:28.544729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:11.849 starting I/O failed: -6 00:23:11.849 starting I/O failed: -6 00:23:11.849 starting I/O failed: -6 00:23:11.849 starting I/O failed: -6 00:23:11.849 starting I/O 
failed: -6 00:23:11.849 starting I/O failed: -6 00:23:11.849 starting I/O failed: -6 00:23:11.849 starting I/O failed: -6 00:23:11.849 starting I/O failed: -6 00:23:11.849 starting I/O failed: -6 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error 
(sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.849 starting I/O failed: -6 00:23:11.849 Write completed with error (sct=0, sc=8) 00:23:11.850 starting I/O failed: -6 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 starting I/O failed: -6 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 starting I/O failed: -6 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 starting I/O failed: -6 00:23:11.850 [2024-11-26 19:13:28.547749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:11.850 NVMe io qpair process completion error 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 starting I/O failed: -6 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 starting I/O failed: -6 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 starting I/O failed: -6 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 starting I/O failed: -6 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 starting I/O failed: -6 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 starting I/O failed: -6 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 starting I/O failed: -6 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 starting I/O failed: -6 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write 
completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 starting I/O failed: -6 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 starting I/O failed: -6 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 [2024-11-26 19:13:28.548884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:11.850 starting I/O failed: -6 00:23:11.850 starting I/O failed: -6 00:23:11.850 starting I/O failed: -6 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 starting I/O failed: -6 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 starting I/O failed: -6 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 starting I/O failed: -6 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 starting I/O failed: -6 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 starting I/O failed: -6 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 starting I/O failed: -6 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 starting I/O failed: -6 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 starting I/O failed: -6 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 starting I/O failed: -6 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 starting I/O failed: -6 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 starting I/O failed: -6 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 starting I/O failed: -6 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 starting I/O failed: -6 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 starting I/O failed: -6 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 starting I/O failed: -6 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 starting I/O failed: -6 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 starting I/O failed: -6 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 starting I/O failed: -6 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 [2024-11-26 19:13:28.549837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ 
transport error -6 (No such device or address) on qpair id 3 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 starting I/O failed: -6 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 starting I/O failed: -6 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.850 starting I/O failed: -6 00:23:11.850 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting 
I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 [2024-11-26 19:13:28.550768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 
00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.851 Write completed with error (sct=0, sc=8) 00:23:11.851 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 [2024-11-26 19:13:28.552185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:11.852 NVMe io qpair process completion error 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, 
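Note on decoding the two messages that dominate this stretch of the log: "(sct=0, sc=8)" is NVMe status-code-type 0 (generic) with status code 0x08, "Command Aborted due to SQ Deletion" (the target tore its submission queues down while writes were still in flight), and "-6" is -ENXIO, the errno behind "No such device or address". The minimal C sketch below is illustrative only and not part of this test run (write_done and poll_once are hypothetical names); it shows how both failure modes surface through SPDK's public NVMe API: aborted commands arrive in each command's completion callback, while a dead qpair is reported by a negative return from spdk_nvme_qpair_process_completions().

#include <errno.h>
#include <stdio.h>

#include "spdk/nvme.h"

/* Per-command completion callback (matches SPDK's spdk_nvme_cmd_cb
 * signature). Writes aborted by the target land here with an error
 * status instead of being reported through the poll loop. */
static void
write_done(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	(void)ctx;

	if (spdk_nvme_cpl_is_error(cpl)) {
		/* sct=0, sc=8 decodes to generic status "Command Aborted
		 * due to SQ Deletion": the submission queue disappeared
		 * while this write was outstanding. */
		printf("Write completed with error (sct=%d, sc=%d)\n",
		       cpl->status.sct, cpl->status.sc);
	}
}

/* One pass of the poll loop. A negative return value reports a failure
 * of the whole qpair, not of any single command; this is the condition
 * behind the "CQ transport error -6" lines above. */
static void
poll_once(struct spdk_nvme_qpair *qpair)
{
	/* 0 == no limit on the number of completions processed per call. */
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

	if (rc == -ENXIO) {
		/* -6 (-ENXIO): the transport connection behind the qpair
		 * is gone; SPDK then fails the still-outstanding commands
		 * back through their callbacks, producing the bursts of
		 * aborted writes seen in the log. */
		fprintf(stderr, "qpair completion processing failed: %d\n", rc);
	}
}

Under this reading, the bursts above are each controller's outstanding writes being failed back as its qpairs are disconnected during the shutdown sequence, one "CQ transport error" per qpair id.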
sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 [2024-11-26 19:13:28.553289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:11.852 starting I/O failed: -6 00:23:11.852 starting I/O failed: -6 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, 
sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 starting I/O failed: -6 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.852 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 [2024-11-26 19:13:28.554247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O 
failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 [2024-11-26 19:13:28.555182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 
Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.853 Write completed with error (sct=0, sc=8) 00:23:11.853 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write 
completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 [2024-11-26 19:13:28.557228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:11.854 NVMe io qpair process completion error 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 [2024-11-26 19:13:28.558240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 
00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 starting I/O failed: -6 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.854 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 [2024-11-26 19:13:28.559129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 
starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write 
completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 [2024-11-26 19:13:28.560041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.855 starting I/O failed: -6 00:23:11.855 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error 
(sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 [2024-11-26 19:13:28.561952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:11.856 NVMe io qpair process completion error 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 
Write completed with error (sct=0, sc=8) 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 [2024-11-26 19:13:28.563190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.856 Write completed with error (sct=0, sc=8) 00:23:11.856 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 
Write completed with error (sct=0, sc=8) 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 [2024-11-26 19:13:28.564002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 
00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 [2024-11-26 19:13:28.564928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.857 Write completed with error (sct=0, sc=8) 00:23:11.857 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O 
failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O failed: -6 00:23:11.858 Write completed with error (sct=0, sc=8) 00:23:11.858 starting I/O 
00:23:11.858 [2024-11-26 19:13:28.566769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:11.858 NVMe io qpair process completion error
00:23:11.858 Initializing NVMe Controllers
00:23:11.858 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:23:11.858 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:23:11.858 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:23:11.858 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:23:11.858 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:23:11.858 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:23:11.858 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:23:11.858 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:23:11.858 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:11.858 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:23:11.858 Controller IO queue size 128, less than required.
00:23:11.858 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:11.858 [the two warning lines above are emitted once per attached controller; duplicates omitted]
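Two notes for anyone triaging this section. First, the completion status above, (sct=0, sc=8), is NVMe generic status 0x08, "Command Aborted due to SQ Deletion", and -6 is ENXIO, matching the "No such device or address" text: the target's queue pairs were deleted underneath a workload that still had writes in flight. A minimal way to provoke the same pattern outside the harness, sketched under the assumption that an SPDK target is listening on 10.0.0.2:4420 and that nvmfpid holds its PID (both hypothetical names here):

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Start a write workload against the TCP target in the background.
"$rootdir/build/bin/spdk_nvme_perf" -q 128 -o 4096 -w write -t 10 \
    -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
perf_pid=$!
sleep 2
# Tear the target down while writes are still queued; the outstanding
# I/O then completes with (sct=0, sc=8) and the qpairs report -6 (ENXIO).
kill -9 "$nvmfpid"
wait "$perf_pid" || true

Second, the "Controller IO queue size 128, less than required" warning is actionable: perf asked for more outstanding I/O than the fabric queue can hold, so the excess waits inside the NVMe driver. A rerun that stays within the advertised depth (same hypothetical binary and address) would simply pass a smaller queue depth:

# -q is the per-namespace queue depth, -o the I/O size in bytes; keeping
# -q at or below the controller's I/O queue size (128 here) avoids
# software-queuing requests at the driver.
"$rootdir/build/bin/spdk_nvme_perf" -q 64 -o 4096 -w write -t 10 \
    -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'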
00:23:11.859 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:23:11.859 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:23:11.859 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:23:11.859 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:23:11.859 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:23:11.859 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:23:11.859 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:23:11.859 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:23:11.859 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:11.859 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:23:11.859 Initialization complete. Launching workers.
00:23:11.859 ========================================================
00:23:11.859                                                                              Latency(us)
00:23:11.859 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:23:11.859 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0  :    1873.53      80.50   68337.68     696.74  127783.82
00:23:11.859 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0  :    1882.84      80.90   68019.17     719.28  129275.20
00:23:11.859 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0  :    1863.80      80.09   68742.59     906.75  124363.32
00:23:11.859 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0  :    1884.74      80.98   68002.95     688.96  125827.29
00:23:11.859 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0 :    1865.49      80.16   68023.19     672.14  119009.11
00:23:11.859 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0  :    1882.84      80.90   67415.12     885.01  124701.82
00:23:11.859 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0  :    1879.45      80.76   67556.91     810.27  124675.37
00:23:11.859 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0  :    1885.80      81.03   67355.20     758.98  117997.99
00:23:11.859 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0  :    1882.41      80.88   67510.57     702.52  124063.20
00:23:11.859 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0  :    1870.57      80.38   67963.51     532.67  119469.34
00:23:11.859 ========================================================
00:23:11.859 Total                                                                    :   18771.47     806.59   67891.43     532.67  129275.20
00:23:11.859
00:23:11.859 [2024-11-26 19:13:28.571888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c560 is same with the state(6) to be set
00:23:11.859 [2024-11-26 19:13:28.571936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218eae0 is same with the state(6) to be set
00:23:11.859 [2024-11-26 19:13:28.571967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cef0 is same with the state(6) to be set
00:23:11.859 [2024-11-26 19:13:28.571997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cbc0 is same with the state(6) to be set
00:23:11.859 [2024-11-26 19:13:28.572026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218e900 is same with the state(6) to be set
00:23:11.859 [2024-11-26 19:13:28.572055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c890 is same with the state(6) to be set
00:23:11.859 [2024-11-26 19:13:28.572082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d410 is same with the state(6) to be set
00:23:11.859 [2024-11-26 19:13:28.572110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d740 is same with the state(6) to be set
00:23:11.859 [2024-11-26 19:13:28.572139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218e720 is same with the state(6) to be set
00:23:11.859 [2024-11-26 19:13:28.572188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218da70 is same with the state(6) to be set
00:23:11.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:23:11.859 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:23:12.801 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3010659
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3010659
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3010659
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
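The `NOT wait 3010659` steps traced above are the harness asserting an expected failure: the perf process was killed by the target shutdown, so `wait` must return non-zero for the test to pass. An illustrative reimplementation of that pattern (a sketch of what the es=0 / es=1 / (( !es == 0 )) lines walk through, not the exact autotest_common.sh source):

NOT() {
    # Run the wrapped command and capture its exit status.
    local es=0
    "$@" || es=$?
    # Succeed only when the wrapped command failed, mirroring the
    # es=1 and (( !es == 0 )) steps in the trace.
    (( es != 0 ))
}
# Usage as in the log: wait reaps the killed perf PID with a
# non-zero status, so the assertion passes.
NOT wait "$perf_pid"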
00:23:12.801 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:23:12.801 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3010432 ']'
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3010432
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3010432 ']'
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3010432
00:23:12.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3010432) - No such process
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3010432 is not found'
00:23:12.801 Process with pid 3010432 is not found
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:14.711 00:23:14.711 real 0m10.277s 00:23:14.711 user 0m27.944s 00:23:14.711 sys 0m4.055s 00:23:14.711 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:14.711 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:14.711 ************************************ 00:23:14.711 END TEST nvmf_shutdown_tc4 00:23:14.711 ************************************ 00:23:14.971 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:23:14.971 00:23:14.971 real 0m44.018s 00:23:14.971 user 1m47.603s 00:23:14.971 sys 0m14.106s 00:23:14.971 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:14.971 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:14.971 ************************************ 00:23:14.971 END TEST nvmf_shutdown 00:23:14.971 ************************************ 00:23:14.971 19:13:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:14.971 19:13:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:14.971 19:13:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:14.971 19:13:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:14.971 ************************************ 00:23:14.971 START TEST nvmf_nsid 00:23:14.971 ************************************ 00:23:14.971 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:14.971 * Looking for test storage... 
00:23:14.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:14.971 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:14.971 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:23:14.971 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:15.232 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:15.232 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:15.232 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:15.232 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:15.232 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:23:15.232 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:23:15.232 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:23:15.232 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:23:15.232 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:23:15.232 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:23:15.232 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:23:15.232 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:15.232 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:23:15.232 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:23:15.232 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:15.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.233 --rc genhtml_branch_coverage=1 00:23:15.233 --rc genhtml_function_coverage=1 00:23:15.233 --rc genhtml_legend=1 00:23:15.233 --rc geninfo_all_blocks=1 00:23:15.233 --rc geninfo_unexecuted_blocks=1 00:23:15.233 00:23:15.233 ' 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:15.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.233 --rc genhtml_branch_coverage=1 00:23:15.233 --rc genhtml_function_coverage=1 00:23:15.233 --rc genhtml_legend=1 00:23:15.233 --rc geninfo_all_blocks=1 00:23:15.233 --rc geninfo_unexecuted_blocks=1 00:23:15.233 00:23:15.233 ' 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:15.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.233 --rc genhtml_branch_coverage=1 00:23:15.233 --rc genhtml_function_coverage=1 00:23:15.233 --rc genhtml_legend=1 00:23:15.233 --rc geninfo_all_blocks=1 00:23:15.233 --rc geninfo_unexecuted_blocks=1 00:23:15.233 00:23:15.233 ' 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:15.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.233 --rc genhtml_branch_coverage=1 00:23:15.233 --rc genhtml_function_coverage=1 00:23:15.233 --rc genhtml_legend=1 00:23:15.233 --rc geninfo_all_blocks=1 00:23:15.233 --rc geninfo_unexecuted_blocks=1 00:23:15.233 00:23:15.233 ' 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:15.233 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:23:15.233 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:23.374 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:23.374 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
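The probe logic above is gather_supported_nvmf_pci_devs classifying every NIC in the box by PCI vendor:device pair (0x8086:0x159b is an Intel E810 port, bound to the ice driver) and, since this run sets SPDK_TEST_NVMF_NICS=e810, keeping only the e810 hits. The sysfs lookup it leans on can be sketched in a few lines of standalone bash; this is an illustration written for this note, not the nvmf/common.sh source, and find_nics is a made-up helper name:

#!/usr/bin/env bash
# List net interfaces backed by a given PCI vendor:device pair, via sysfs.
find_nics() {
    local vendor=$1 device=$2 pci dev
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == "$vendor" && $(<"$pci/device") == "$device" ]] || continue
        # A port bound to a kernel driver exposes its netdev name under net/.
        for dev in "$pci"/net/*; do
            [[ -e $dev ]] && echo "${pci##*/} -> ${dev##*/}"
        done
    done
}
find_nics 0x8086 0x159b   # Intel E810, as matched in the trace above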
00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:23.374 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:23.374 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:23.375 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:23.375 19:13:39 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:23.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:23.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.686 ms 00:23:23.375 00:23:23.375 --- 10.0.0.2 ping statistics --- 00:23:23.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.375 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:23.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:23.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:23:23.375 00:23:23.375 --- 10.0.0.1 ping statistics --- 00:23:23.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.375 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3016100 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3016100 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3016100 ']' 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:23.375 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:23.375 [2024-11-26 19:13:39.859770] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
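The ip/netns/ping sequence above is nvmf_tcp_init building the standard two-port test topology: one E810 port (cvl_0_0, 10.0.0.2) is moved into a private network namespace that will host the NVMe-oF target, its peer (cvl_0_1, 10.0.0.1) stays in the root namespace for the initiator, an iptables rule admits the NVMe/TCP port, and a ping in each direction proves the path before any NVMe traffic flows. Condensed from the commands visible in the trace (interface names are this rig's; run as root):

ip netns add cvl_0_0_ns_spdk                          # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator-side address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                                    # root ns -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # namespace -> root ns

From here on every target-side command is simply prefixed with 'ip netns exec cvl_0_0_ns_spdk', which is exactly what the NVMF_TARGET_NS_CMD/NVMF_APP assignment above arranges.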
00:23:23.375 [2024-11-26 19:13:39.859840] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:23.375 [2024-11-26 19:13:39.960713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.375 [2024-11-26 19:13:40.014191] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:23.375 [2024-11-26 19:13:40.014243] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:23.375 [2024-11-26 19:13:40.014258] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:23.375 [2024-11-26 19:13:40.014265] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:23.375 [2024-11-26 19:13:40.014271] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:23.375 [2024-11-26 19:13:40.014907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.638 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:23.638 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:23.638 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:23.638 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:23.638 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:23.638 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:23.638 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:23.638 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3016200 00:23:23.638 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:23:23.638 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:23:23.638 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:23:23.638 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:23:23.638 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:23.638 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:23.638 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.638 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.638 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:23.638 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.638 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:23.638 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:23.638 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:23:23.638 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:23:23.638 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:23:23.638 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=bec99f79-d832-4a09-93d4-f254984ab955 00:23:23.638 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:23:23.638 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=0642cd39-db99-4a8a-9227-faceaf2ff3ca 00:23:23.638 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:23:23.638 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=fbdbb17d-5ccb-46c6-8fa2-f83ea59f895d 00:23:23.638 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:23:23.638 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.638 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:23.638 null0 00:23:23.638 null1 00:23:23.638 null2 00:23:23.638 [2024-11-26 19:13:40.776476] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:23:23.638 [2024-11-26 19:13:40.776545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3016200 ] 00:23:23.638 [2024-11-26 19:13:40.778106] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:23.638 [2024-11-26 19:13:40.802445] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:23.638 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.639 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3016200 /var/tmp/tgt2.sock 00:23:23.639 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3016200 ']' 00:23:23.639 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:23:23.639 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:23.639 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:23:23.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
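Note the two cooperating SPDK processes at this point: the nvmf_tgt started earlier on core 0 answers /var/tmp/spdk.sock inside the namespace, while a second spdk_tgt on core 1 answers /var/tmp/tgt2.sock, and waitforlisten blocks until the newcomer accepts RPCs. A minimal stand-in for that wait is sketched below; it is not the autotest_common.sh implementation, just an illustration that polls rpc_get_methods, an RPC any live SPDK application answers (wait_for_rpc is a made-up name, and the rpc.py path assumes the repo root as the working directory):

wait_for_rpc() {
    local sock=$1 i
    for ((i = 0; i < 100; i++)); do
        # Succeeds once the app has bound its UNIX-domain RPC socket.
        scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}

build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock &   # mirrors the trace above
wait_for_rpc /var/tmp/tgt2.sock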
00:23:23.639 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:23.639 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:23.900 [2024-11-26 19:13:40.867794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.900 [2024-11-26 19:13:40.920628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.160 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:24.160 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:24.160 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:23:24.421 [2024-11-26 19:13:41.486533] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.421 [2024-11-26 19:13:41.502726] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:23:24.421 nvme0n1 nvme0n2 00:23:24.421 nvme1n1 00:23:24.421 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:23:24.421 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:23:24.421 19:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:25.807 19:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:23:25.807 19:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:23:25.807 19:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:23:25.807 19:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:23:25.807 19:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:23:25.807 19:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:23:25.807 19:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:23:25.807 19:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:25.807 19:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:25.807 19:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:25.807 19:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:23:25.807 19:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:23:25.807 19:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:27.194 19:13:44 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid bec99f79-d832-4a09-93d4-f254984ab955 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=bec99f79d8324a0993d4f254984ab955 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo BEC99F79D8324A0993D4F254984AB955 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ BEC99F79D8324A0993D4F254984AB955 == \B\E\C\9\9\F\7\9\D\8\3\2\4\A\0\9\9\3\D\4\F\2\5\4\9\8\4\A\B\9\5\5 ]] 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 0642cd39-db99-4a8a-9227-faceaf2ff3ca 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=0642cd39db994a8a9227faceaf2ff3ca 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 0642CD39DB994A8A9227FACEAF2FF3CA 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 0642CD39DB994A8A9227FACEAF2FF3CA == \0\6\4\2\C\D\3\9\D\B\9\9\4\A\8\A\9\2\2\7\F\A\C\E\A\F\2\F\F\3\C\A ]] 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:23:27.194 19:13:44 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid fbdbb17d-5ccb-46c6-8fa2-f83ea59f895d 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=fbdbb17d5ccb46c68fa2f83ea59f895d 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo FBDBB17D5CCB46C68FA2F83EA59F895D 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ FBDBB17D5CCB46C68FA2F83EA59F895D == \F\B\D\B\B\1\7\D\5\C\C\B\4\6\C\6\8\F\A\2\F\8\3\E\A\5\9\F\8\9\5\D ]] 00:23:27.194 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:23:27.455 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:23:27.455 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:23:27.455 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3016200 00:23:27.455 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3016200 ']' 00:23:27.455 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3016200 00:23:27.455 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:27.455 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:27.455 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3016200 00:23:27.455 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:27.455 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:27.455 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3016200' 00:23:27.455 killing process with pid 3016200 00:23:27.455 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3016200 00:23:27.455 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3016200 00:23:27.715 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:23:27.715 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:27.715 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:23:27.715 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:27.715 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:23:27.715 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:27.715 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:27.715 rmmod nvme_tcp 00:23:27.715 rmmod nvme_fabrics 00:23:27.715 rmmod nvme_keyring 00:23:27.715 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:27.715 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:23:27.715 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:23:27.715 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3016100 ']' 00:23:27.715 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3016100 00:23:27.715 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3016100 ']' 00:23:27.715 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3016100 00:23:27.715 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:27.715 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:27.715 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3016100 00:23:27.715 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:27.715 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:27.715 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3016100' 00:23:27.715 killing process with pid 3016100 00:23:27.715 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3016100 00:23:27.715 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3016100 00:23:27.976 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:27.976 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:27.976 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:27.976 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:23:27.976 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:23:27.976 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:27.976 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:23:27.976 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:27.976 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:27.976 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.976 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:27.976 19:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.889 19:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:29.889 00:23:29.889 real 0m15.001s 00:23:29.889 user 
0m11.437s 00:23:29.889 sys 0m6.927s 00:23:29.889 19:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:29.889 19:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:29.889 ************************************ 00:23:29.889 END TEST nvmf_nsid 00:23:29.889 ************************************ 00:23:29.889 19:13:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:23:29.889 00:23:29.889 real 13m3.859s 00:23:29.889 user 27m16.140s 00:23:29.889 sys 3m55.485s 00:23:29.889 19:13:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:29.889 19:13:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:29.889 ************************************ 00:23:29.889 END TEST nvmf_target_extra 00:23:29.889 ************************************ 00:23:30.149 19:13:47 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:30.149 19:13:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:30.149 19:13:47 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:30.149 19:13:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:30.149 ************************************ 00:23:30.149 START TEST nvmf_host 00:23:30.149 ************************************ 00:23:30.149 19:13:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:30.149 * Looking for test storage... 00:23:30.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:30.149 19:13:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:30.149 19:13:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:23:30.149 19:13:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:30.149 19:13:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:30.149 19:13:47 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:30.149 19:13:47 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:30.149 19:13:47 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:30.149 19:13:47 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:30.149 19:13:47 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:30.149 19:13:47 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:30.149 19:13:47 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:30.149 19:13:47 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:30.149 19:13:47 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:30.149 19:13:47 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:30.149 19:13:47 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:30.410 19:13:47 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:23:30.410 19:13:47 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:23:30.410 19:13:47 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:30.410 19:13:47 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:30.410 19:13:47 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:23:30.410 19:13:47 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:23:30.410 19:13:47 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:30.410 19:13:47 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:23:30.410 19:13:47 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:30.410 19:13:47 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:23:30.410 19:13:47 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:23:30.410 19:13:47 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:30.410 19:13:47 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:23:30.410 19:13:47 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:30.410 19:13:47 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:30.410 19:13:47 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:30.410 19:13:47 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:23:30.410 19:13:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:30.410 19:13:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:30.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.410 --rc genhtml_branch_coverage=1 00:23:30.410 --rc genhtml_function_coverage=1 00:23:30.410 --rc genhtml_legend=1 00:23:30.410 --rc geninfo_all_blocks=1 00:23:30.410 --rc geninfo_unexecuted_blocks=1 00:23:30.410 00:23:30.410 ' 00:23:30.410 19:13:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:30.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.410 --rc genhtml_branch_coverage=1 00:23:30.410 --rc genhtml_function_coverage=1 00:23:30.410 --rc genhtml_legend=1 00:23:30.410 --rc geninfo_all_blocks=1 00:23:30.410 --rc geninfo_unexecuted_blocks=1 00:23:30.410 00:23:30.410 ' 00:23:30.410 19:13:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:30.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.410 --rc genhtml_branch_coverage=1 00:23:30.410 --rc genhtml_function_coverage=1 00:23:30.410 --rc genhtml_legend=1 00:23:30.410 --rc geninfo_all_blocks=1 00:23:30.410 --rc geninfo_unexecuted_blocks=1 00:23:30.410 00:23:30.410 ' 00:23:30.410 19:13:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:30.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.410 --rc genhtml_branch_coverage=1 00:23:30.410 --rc genhtml_function_coverage=1 00:23:30.410 --rc genhtml_legend=1 00:23:30.410 --rc geninfo_all_blocks=1 00:23:30.410 --rc geninfo_unexecuted_blocks=1 00:23:30.410 00:23:30.410 ' 00:23:30.410 19:13:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:30.410 19:13:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:30.410 19:13:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:30.410 19:13:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:30.410 19:13:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
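The ver1/ver2 walk just repeated above (and before each earlier test) is scripts/common.sh answering one question: is the installed lcov, 1.15 on this rig, older than 2, in which case the older --rc lcov_branch_coverage=1 flag spelling is exported (visible above as lcov_rc_opt). The dotted-version comparison reduces to a short self-contained function; this is an illustration of the idea, not the common.sh source, and version_lt is a made-up name:

version_lt() {
    local -a v1 v2
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for ((i = 0; i < n; i++)); do
        # Missing components count as 0, so "2" compares like "2.0".
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}

version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "use legacy lcov flags"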
00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:30.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.411 ************************************ 00:23:30.411 START TEST nvmf_multicontroller 00:23:30.411 ************************************ 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:30.411 * Looking for test storage... 
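Note the "[: : integer expression expected" complaint from nvmf/common.sh line 33 a few lines above (it recurs each time common.sh is sourced in this section): the trace shows the test '[' '' -eq 1 ']', i.e. an unset variable reaching a numeric comparison. It is harmless here, since the failing test simply takes the false branch, but the pattern is worth flagging. A minimal defensive sketch follows; the variable name is hypothetical, as the trace does not show which variable expanded empty:

    # Default the flag before the numeric test so [ never sees an empty operand.
    if [ "${SPDK_SOME_FLAG:-0}" -eq 1 ]; then   # SPDK_SOME_FLAG is hypothetical
        echo "optional feature enabled"
    fi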
00:23:30.411 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:23:30.411 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:30.673 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:30.673 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:30.673 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:30.673 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:30.673 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:23:30.673 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:23:30.673 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:23:30.673 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:23:30.673 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:23:30.673 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:23:30.673 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:23:30.673 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:30.673 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:23:30.673 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:23:30.673 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:30.673 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:30.673 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:23:30.673 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:23:30.673 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:30.673 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:23:30.673 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:23:30.673 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:23:30.673 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:23:30.673 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:30.673 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:23:30.673 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:23:30.673 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:30.673 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:30.673 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:23:30.673 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:30.673 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:30.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.673 --rc genhtml_branch_coverage=1 00:23:30.673 --rc genhtml_function_coverage=1 00:23:30.673 --rc genhtml_legend=1 00:23:30.673 --rc geninfo_all_blocks=1 00:23:30.673 --rc geninfo_unexecuted_blocks=1 00:23:30.673 00:23:30.673 ' 00:23:30.673 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:30.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.673 --rc genhtml_branch_coverage=1 00:23:30.673 --rc genhtml_function_coverage=1 00:23:30.673 --rc genhtml_legend=1 00:23:30.674 --rc geninfo_all_blocks=1 00:23:30.674 --rc geninfo_unexecuted_blocks=1 00:23:30.674 00:23:30.674 ' 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:30.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.674 --rc genhtml_branch_coverage=1 00:23:30.674 --rc genhtml_function_coverage=1 00:23:30.674 --rc genhtml_legend=1 00:23:30.674 --rc geninfo_all_blocks=1 00:23:30.674 --rc geninfo_unexecuted_blocks=1 00:23:30.674 00:23:30.674 ' 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:30.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.674 --rc genhtml_branch_coverage=1 00:23:30.674 --rc genhtml_function_coverage=1 00:23:30.674 --rc genhtml_legend=1 00:23:30.674 --rc geninfo_all_blocks=1 00:23:30.674 --rc geninfo_unexecuted_blocks=1 00:23:30.674 00:23:30.674 ' 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:30.674 19:13:47 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:30.674 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:30.674 19:13:47 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:23:30.674 19:13:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:23:38.817 
19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:38.817 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:38.817 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:38.817 19:13:54 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:38.817 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:38.818 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.818 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:38.818 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.818 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:38.818 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:38.818 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.818 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:38.818 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:38.818 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.818 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:38.818 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.818 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:38.818 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.818 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:38.818 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:38.818 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.818 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:38.818 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:38.818 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.818 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:38.818 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:23:38.818 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:38.818 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:38.818 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
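The nvmf_tcp_init body that follows wires the two e810 ports discovered above into a loopback topology: the target-facing port is moved into a private network namespace while the initiator port stays in the root namespace. Condensed, the bring-up reduces to the commands below (interface names, addresses, and the firewall rule exactly as logged; run as root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Accept NVMe/TCP connections to port 4420 arriving on the initiator side:
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

Reachability is then confirmed with one ping in each direction before the target is started inside the namespace.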
00:23:38.818 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:38.818 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:38.818 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:38.818 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:38.818 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:38.818 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:38.818 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:38.818 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:38.818 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:38.818 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:38.818 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:38.818 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:38.818 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:38.818 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:38.818 19:13:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:38.818 19:13:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:38.818 19:13:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:38.818 19:13:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:38.818 19:13:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:38.818 19:13:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:38.818 19:13:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:38.818 19:13:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:38.818 19:13:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:38.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:38.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.700 ms 00:23:38.818 00:23:38.818 --- 10.0.0.2 ping statistics --- 00:23:38.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.818 rtt min/avg/max/mdev = 0.700/0.700/0.700/0.000 ms 00:23:38.818 19:13:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:38.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:38.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:23:38.818 00:23:38.818 --- 10.0.0.1 ping statistics --- 00:23:38.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.818 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:23:38.818 19:13:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:38.818 19:13:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:23:38.818 19:13:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:38.818 19:13:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:38.818 19:13:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:38.818 19:13:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:38.818 19:13:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:38.818 19:13:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:38.818 19:13:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:38.818 19:13:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:38.818 19:13:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:38.818 19:13:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:38.818 19:13:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:38.818 19:13:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3021303 00:23:38.818 19:13:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3021303 00:23:38.818 19:13:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:38.818 19:13:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3021303 ']' 00:23:38.818 19:13:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.818 19:13:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:38.818 19:13:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:38.819 19:13:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:38.819 19:13:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:38.819 [2024-11-26 19:13:55.323488] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
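The target runs inside that namespace: NVMF_APP was prefixed with the netns exec command (common.sh@293 above), so the launch just logged amounts to the following, with the workspace prefix dropped. Here -m 0xE pins the three reactors to cores 1-3, -i 0 sets the shared-memory id, and -e 0xFFFF enables all tracepoint groups, matching the startup notices printed below:

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE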
00:23:38.819 [2024-11-26 19:13:55.323554] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:38.819 [2024-11-26 19:13:55.425211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:38.819 [2024-11-26 19:13:55.477179] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:38.819 [2024-11-26 19:13:55.477237] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:38.819 [2024-11-26 19:13:55.477246] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:38.819 [2024-11-26 19:13:55.477253] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:38.819 [2024-11-26 19:13:55.477259] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:38.819 [2024-11-26 19:13:55.479156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:38.819 [2024-11-26 19:13:55.479294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:38.819 [2024-11-26 19:13:55.479432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.080 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:39.080 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:39.080 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:39.080 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:39.080 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.080 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.080 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:39.080 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.080 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.080 [2024-11-26 19:13:56.204782] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.080 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.080 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:39.080 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.080 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.080 Malloc0 00:23:39.080 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.080 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:39.080 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.080 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:23:39.080 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.080 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:39.080 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.080 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.080 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.081 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:39.081 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.081 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.081 [2024-11-26 19:13:56.286621] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:39.342 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.342 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:39.342 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.342 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.342 [2024-11-26 19:13:56.298480] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:39.342 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.342 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:39.342 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.342 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.342 Malloc1 00:23:39.342 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.342 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:39.342 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.342 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.342 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.342 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:39.342 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.342 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.342 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.342 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:39.342 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.342 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.342 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.342 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:39.342 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.342 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.342 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.342 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3021656 00:23:39.342 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:39.342 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:39.342 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3021656 /var/tmp/bdevperf.sock 00:23:39.342 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3021656 ']' 00:23:39.342 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:39.342 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:39.342 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:39.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
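The scaffolding here follows bdevperf's RPC-driven mode: started with -z it creates no bdevs and waits for configuration on a private RPC socket, the test then attaches NVMe-oF controllers over that socket (rpc_cmd with -s /var/tmp/bdevperf.sock is effectively the framework's wrapper around scripts/rpc.py), and the I/O run is triggered afterwards by bdevperf.py. Stripped of the workspace prefix, the moving parts as logged:

    # Start bdevperf idle (-z) on its own RPC socket; the queued workload is
    # 128-deep 4 KiB writes for 1 second.
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &

    # ... bdev_nvme_attach_controller calls against /var/tmp/bdevperf.sock ...

    # Kick off the configured run and print the JSON result block seen below:
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests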
00:23:39.342 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:39.342 19:13:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.284 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:40.284 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:40.284 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:40.284 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.284 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.544 NVMe0n1 00:23:40.544 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.544 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:40.544 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:40.544 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.544 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.544 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.544 1 00:23:40.544 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:40.544 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:40.544 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:40.544 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:40.544 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:40.544 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:40.544 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:40.544 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:40.544 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.544 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.544 request: 00:23:40.544 { 00:23:40.544 "name": "NVMe0", 00:23:40.544 "trtype": "tcp", 00:23:40.544 "traddr": "10.0.0.2", 00:23:40.544 "adrfam": "ipv4", 00:23:40.544 "trsvcid": "4420", 00:23:40.544 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:40.544 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:40.544 "hostaddr": "10.0.0.1", 00:23:40.544 "prchk_reftag": false, 00:23:40.544 "prchk_guard": false, 00:23:40.544 "hdgst": false, 00:23:40.544 "ddgst": false, 00:23:40.544 "allow_unrecognized_csi": false, 00:23:40.544 "method": "bdev_nvme_attach_controller", 00:23:40.544 "req_id": 1 00:23:40.544 } 00:23:40.544 Got JSON-RPC error response 00:23:40.544 response: 00:23:40.544 { 00:23:40.544 "code": -114, 00:23:40.544 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:40.544 } 00:23:40.544 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:40.544 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:40.544 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:40.544 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:40.544 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:40.544 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:40.544 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.545 request: 00:23:40.545 { 00:23:40.545 "name": "NVMe0", 00:23:40.545 "trtype": "tcp", 00:23:40.545 "traddr": "10.0.0.2", 00:23:40.545 "adrfam": "ipv4", 00:23:40.545 "trsvcid": "4420", 00:23:40.545 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:40.545 "hostaddr": "10.0.0.1", 00:23:40.545 "prchk_reftag": false, 00:23:40.545 "prchk_guard": false, 00:23:40.545 "hdgst": false, 00:23:40.545 "ddgst": false, 00:23:40.545 "allow_unrecognized_csi": false, 00:23:40.545 "method": "bdev_nvme_attach_controller", 00:23:40.545 "req_id": 1 00:23:40.545 } 00:23:40.545 Got JSON-RPC error response 00:23:40.545 response: 00:23:40.545 { 00:23:40.545 "code": -114, 00:23:40.545 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:40.545 } 00:23:40.545 19:13:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.545 request: 00:23:40.545 { 00:23:40.545 "name": "NVMe0", 00:23:40.545 "trtype": "tcp", 00:23:40.545 "traddr": "10.0.0.2", 00:23:40.545 "adrfam": "ipv4", 00:23:40.545 "trsvcid": "4420", 00:23:40.545 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.545 "hostaddr": "10.0.0.1", 00:23:40.545 "prchk_reftag": false, 00:23:40.545 "prchk_guard": false, 00:23:40.545 "hdgst": false, 00:23:40.545 "ddgst": false, 00:23:40.545 "multipath": "disable", 00:23:40.545 "allow_unrecognized_csi": false, 00:23:40.545 "method": "bdev_nvme_attach_controller", 00:23:40.545 "req_id": 1 00:23:40.545 } 00:23:40.545 Got JSON-RPC error response 00:23:40.545 response: 00:23:40.545 { 00:23:40.545 "code": -114, 00:23:40.545 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:23:40.545 } 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:40.545 19:13:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.545 request: 00:23:40.545 { 00:23:40.545 "name": "NVMe0", 00:23:40.545 "trtype": "tcp", 00:23:40.545 "traddr": "10.0.0.2", 00:23:40.545 "adrfam": "ipv4", 00:23:40.545 "trsvcid": "4420", 00:23:40.545 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.545 "hostaddr": "10.0.0.1", 00:23:40.545 "prchk_reftag": false, 00:23:40.545 "prchk_guard": false, 00:23:40.545 "hdgst": false, 00:23:40.545 "ddgst": false, 00:23:40.545 "multipath": "failover", 00:23:40.545 "allow_unrecognized_csi": false, 00:23:40.545 "method": "bdev_nvme_attach_controller", 00:23:40.545 "req_id": 1 00:23:40.545 } 00:23:40.545 Got JSON-RPC error response 00:23:40.545 response: 00:23:40.545 { 00:23:40.545 "code": -114, 00:23:40.545 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:40.545 } 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.545 NVMe0n1 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
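Summarizing the attach sequence just exercised: a controller name binds to one subsystem and one set of network paths. Repeating the identical traddr/trsvcid under the same name fails with JSON-RPC error -114 even with -x failover, as does pointing the name at a different subsystem NQN or re-attaching with -x disable; only a genuinely new path to the same subsystem (the second listener on port 4421) is accepted as an additional path under the existing controller name. Reduced to plain rpc.py calls, with socket, addresses, and NQN as logged:

    RPC="scripts/rpc.py -s /var/tmp/bdevperf.sock"

    # First attach creates bdev NVMe0n1:
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1

    # Same name + same path (with or without -x failover), another subsystem
    # NQN, or -x disable: all rejected with -114, per the four request/response
    # pairs above.

    # A second listener (port 4421) on the same subsystem is accepted and adds
    # a path under NVMe0:
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1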
00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:40.545 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.546 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.805 00:23:40.805 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.805 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:40.805 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:40.805 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.805 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.805 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.805 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:40.806 19:13:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:41.745 { 00:23:41.745 "results": [ 00:23:41.745 { 00:23:41.745 "job": "NVMe0n1", 00:23:41.745 "core_mask": "0x1", 00:23:41.745 "workload": "write", 00:23:41.745 "status": "finished", 00:23:41.745 "queue_depth": 128, 00:23:41.745 "io_size": 4096, 00:23:41.745 "runtime": 1.005596, 00:23:41.745 "iops": 28393.112144439714, 00:23:41.745 "mibps": 110.91059431421763, 00:23:41.745 "io_failed": 0, 00:23:41.745 "io_timeout": 0, 00:23:41.745 "avg_latency_us": 4497.669427477351, 00:23:41.745 "min_latency_us": 2102.6133333333332, 00:23:41.745 "max_latency_us": 15619.413333333334 00:23:41.745 } 00:23:41.745 ], 00:23:41.745 "core_count": 1 00:23:41.745 } 00:23:41.745 19:13:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:41.745 19:13:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.745 19:13:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.005 19:13:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.005 19:13:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:23:42.005 19:13:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3021656 00:23:42.005 19:13:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 3021656 ']' 00:23:42.005 19:13:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3021656 00:23:42.005 19:13:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:42.005 19:13:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:42.006 19:13:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3021656 00:23:42.006 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:42.006 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:42.006 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3021656' 00:23:42.006 killing process with pid 3021656 00:23:42.006 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3021656 00:23:42.006 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3021656 00:23:42.006 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:42.006 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.006 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.006 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.006 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:42.006 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.006 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.006 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.006 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:42.006 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:42.006 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:42.006 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:42.006 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:23:42.006 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:23:42.006 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:42.006 [2024-11-26 19:13:56.422420] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
00:23:42.006 [2024-11-26 19:13:56.422494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3021656 ] 00:23:42.006 [2024-11-26 19:13:56.516372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.006 [2024-11-26 19:13:56.569408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.006 [2024-11-26 19:13:57.802108] bdev.c:4906:bdev_name_add: *ERROR*: Bdev name ba7a16dd-b759-47eb-ba36-a047fabe0ea1 already exists 00:23:42.006 [2024-11-26 19:13:57.802139] bdev.c:8106:bdev_register: *ERROR*: Unable to add uuid:ba7a16dd-b759-47eb-ba36-a047fabe0ea1 alias for bdev NVMe1n1 00:23:42.006 [2024-11-26 19:13:57.802148] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:42.006 Running I/O for 1 seconds... 00:23:42.006 28359.00 IOPS, 110.78 MiB/s 00:23:42.006 Latency(us) 00:23:42.006 [2024-11-26T18:13:59.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:42.006 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:42.006 NVMe0n1 : 1.01 28393.11 110.91 0.00 0.00 4497.67 2102.61 15619.41 00:23:42.006 [2024-11-26T18:13:59.219Z] =================================================================================================================== 00:23:42.006 [2024-11-26T18:13:59.219Z] Total : 28393.11 110.91 0.00 0.00 4497.67 2102.61 15619.41 00:23:42.006 Received shutdown signal, test time was about 1.000000 seconds 00:23:42.006 00:23:42.006 Latency(us) 00:23:42.006 [2024-11-26T18:13:59.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:42.006 [2024-11-26T18:13:59.219Z] =================================================================================================================== 00:23:42.006 [2024-11-26T18:13:59.219Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:42.006 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:42.006 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:42.006 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:42.006 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:23:42.006 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:42.006 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:23:42.006 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:42.006 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:23:42.006 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:42.006 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:42.006 rmmod nvme_tcp 00:23:42.266 rmmod nvme_fabrics 00:23:42.266 rmmod nvme_keyring 00:23:42.266 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:42.266 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:23:42.266 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 
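With the I/O pass done, nvmftestfini unwinds the fixture. The traced cleanup reduces to roughly the sketch below; the command forms are taken from the surrounding trace, except that _remove_spdk_ns runs with xtrace disabled, so the ip netns delete line is an assumption about its body rather than a captured command:

    modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring     # module unload, already logged above
    kill "$nvmfpid" && wait "$nvmfpid"                    # killprocess 3021303, the nvmf target
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # iptr: drop the SPDK ACCEPT rules
    ip netns delete cvl_0_0_ns_spdk                       # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                              # clear the initiator-side address
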
00:23:42.266 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3021303 ']' 00:23:42.266 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3021303 00:23:42.266 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3021303 ']' 00:23:42.266 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3021303 00:23:42.266 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:42.266 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:42.266 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3021303 00:23:42.266 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:42.266 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:42.266 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3021303' 00:23:42.266 killing process with pid 3021303 00:23:42.266 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3021303 00:23:42.266 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3021303 00:23:42.266 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:42.266 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:42.266 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:42.266 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:42.266 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:23:42.266 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:42.266 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:23:42.266 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:42.266 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:42.266 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.266 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:42.266 19:13:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.809 19:14:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:44.809 00:23:44.809 real 0m14.097s 00:23:44.809 user 0m17.233s 00:23:44.809 sys 0m6.582s 00:23:44.809 19:14:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:44.809 19:14:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:44.809 ************************************ 00:23:44.809 END TEST nvmf_multicontroller 00:23:44.809 ************************************ 00:23:44.809 19:14:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:44.809 19:14:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:44.809 19:14:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:44.809 19:14:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.809 ************************************ 00:23:44.809 START TEST nvmf_aer 00:23:44.809 ************************************ 00:23:44.809 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:44.809 * Looking for test storage... 00:23:44.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:44.809 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:44.809 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:23:44.809 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:44.809 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:44.809 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:44.809 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:44.809 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:44.809 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:44.809 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:44.809 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:44.809 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:44.809 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:44.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.810 --rc genhtml_branch_coverage=1 00:23:44.810 --rc genhtml_function_coverage=1 00:23:44.810 --rc genhtml_legend=1 00:23:44.810 --rc geninfo_all_blocks=1 00:23:44.810 --rc geninfo_unexecuted_blocks=1 00:23:44.810 00:23:44.810 ' 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:44.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.810 --rc genhtml_branch_coverage=1 00:23:44.810 --rc genhtml_function_coverage=1 00:23:44.810 --rc genhtml_legend=1 00:23:44.810 --rc geninfo_all_blocks=1 00:23:44.810 --rc geninfo_unexecuted_blocks=1 00:23:44.810 00:23:44.810 ' 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:44.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.810 --rc genhtml_branch_coverage=1 00:23:44.810 --rc genhtml_function_coverage=1 00:23:44.810 --rc genhtml_legend=1 00:23:44.810 --rc geninfo_all_blocks=1 00:23:44.810 --rc geninfo_unexecuted_blocks=1 00:23:44.810 00:23:44.810 ' 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:44.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.810 --rc genhtml_branch_coverage=1 00:23:44.810 --rc genhtml_function_coverage=1 00:23:44.810 --rc genhtml_legend=1 00:23:44.810 --rc geninfo_all_blocks=1 00:23:44.810 --rc geninfo_unexecuted_blocks=1 00:23:44.810 00:23:44.810 ' 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:44.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:44.810 19:14:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:52.995 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:52.995 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:52.995 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:52.995 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:52.995 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:52.995 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:52.995 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:52.995 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:52.995 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:52.995 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:23:52.995 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:52.995 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:52.995 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:52.995 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:52.995 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:52.995 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:52.995 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:52.995 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:52.995 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:52.995 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:52.995 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:52.995 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:52.995 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:52.995 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:52.995 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:52.995 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:52.995 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:52.995 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:52.995 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:52.995 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:52.995 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:52.995 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:23:52.995 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:52.995 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:52.995 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:52.995 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:52.995 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:52.995 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:52.995 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:52.996 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:52.996 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:52.996 19:14:09 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:52.996 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:52.996 
19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:52.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:52.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.670 ms 00:23:52.996 00:23:52.996 --- 10.0.0.2 ping statistics --- 00:23:52.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.996 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:52.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:52.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:23:52.996 00:23:52.996 --- 10.0.0.1 ping statistics --- 00:23:52.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.996 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3026334 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3026334 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 3026334 ']' 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:52.996 19:14:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:52.996 [2024-11-26 19:14:09.451852] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
00:23:52.996 [2024-11-26 19:14:09.451922] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.996 [2024-11-26 19:14:09.553669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:52.996 [2024-11-26 19:14:09.607294] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:52.996 [2024-11-26 19:14:09.607351] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.996 [2024-11-26 19:14:09.607361] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.996 [2024-11-26 19:14:09.607368] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.996 [2024-11-26 19:14:09.607375] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:52.996 [2024-11-26 19:14:09.609836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.996 [2024-11-26 19:14:09.609994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:52.996 [2024-11-26 19:14:09.610162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.996 [2024-11-26 19:14:09.610227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.258 [2024-11-26 19:14:10.316175] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.258 Malloc0 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.258 [2024-11-26 19:14:10.394187] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.258 [ 00:23:53.258 { 00:23:53.258 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:53.258 "subtype": "Discovery", 00:23:53.258 "listen_addresses": [], 00:23:53.258 "allow_any_host": true, 00:23:53.258 "hosts": [] 00:23:53.258 }, 00:23:53.258 { 00:23:53.258 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.258 "subtype": "NVMe", 00:23:53.258 "listen_addresses": [ 00:23:53.258 { 00:23:53.258 "trtype": "TCP", 00:23:53.258 "adrfam": "IPv4", 00:23:53.258 "traddr": "10.0.0.2", 00:23:53.258 "trsvcid": "4420" 00:23:53.258 } 00:23:53.258 ], 00:23:53.258 "allow_any_host": true, 00:23:53.258 "hosts": [], 00:23:53.258 "serial_number": "SPDK00000000000001", 00:23:53.258 "model_number": "SPDK bdev Controller", 00:23:53.258 "max_namespaces": 2, 00:23:53.258 "min_cntlid": 1, 00:23:53.258 "max_cntlid": 65519, 00:23:53.258 "namespaces": [ 00:23:53.258 { 00:23:53.258 "nsid": 1, 00:23:53.258 "bdev_name": "Malloc0", 00:23:53.258 "name": "Malloc0", 00:23:53.258 "nguid": "6CDB5DE715F54417878677D7DF3B7197", 00:23:53.258 "uuid": "6cdb5de7-15f5-4417-8786-77d7df3b7197" 00:23:53.258 } 00:23:53.258 ] 00:23:53.258 } 00:23:53.258 ] 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3026539 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:23:53.258 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:53.518 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:53.518 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:23:53.518 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:23:53.518 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:53.518 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:53.518 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:23:53.518 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:23:53.518 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:53.779 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:53.779 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:53.779 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:23:53.779 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:53.779 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.779 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.779 Malloc1 00:23:53.779 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.779 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:53.779 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.779 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.779 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.779 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:53.779 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.779 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.779 Asynchronous Event Request test 00:23:53.779 Attaching to 10.0.0.2 00:23:53.779 Attached to 10.0.0.2 00:23:53.779 Registering asynchronous event callbacks... 00:23:53.779 Starting namespace attribute notice tests for all controllers... 00:23:53.779 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:53.779 aer_cb - Changed Namespace 00:23:53.780 Cleaning up... 
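The AER exercise above is the point of this test: with test/nvme/aer/aer attached to cnode1 and the harness waiting on /tmp/aer_touch_file, adding a second namespace is what fires the Namespace Attribute Changed notice (log page 4) that aer_cb reports. Stripped of the rpc_cmd plumbing, the target-side sequence the trace just walked through looks roughly like the following sketch, with command forms copied from the trace and scripts/rpc.py standing in for the harness's rpc_cmd wrapper:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # the trigger: a second namespace (nsid 2) raises the AEN observed above
    scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

The nvmf_get_subsystems output that follows confirms the result: cnode1 now lists both Malloc0 (nsid 1) and Malloc1 (nsid 2) under max_namespaces 2.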
00:23:53.780 [ 00:23:53.780 { 00:23:53.780 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:53.780 "subtype": "Discovery", 00:23:53.780 "listen_addresses": [], 00:23:53.780 "allow_any_host": true, 00:23:53.780 "hosts": [] 00:23:53.780 }, 00:23:53.780 { 00:23:53.780 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.780 "subtype": "NVMe", 00:23:53.780 "listen_addresses": [ 00:23:53.780 { 00:23:53.780 "trtype": "TCP", 00:23:53.780 "adrfam": "IPv4", 00:23:53.780 "traddr": "10.0.0.2", 00:23:53.780 "trsvcid": "4420" 00:23:53.780 } 00:23:53.780 ], 00:23:53.780 "allow_any_host": true, 00:23:53.780 "hosts": [], 00:23:53.780 "serial_number": "SPDK00000000000001", 00:23:53.780 "model_number": "SPDK bdev Controller", 00:23:53.780 "max_namespaces": 2, 00:23:53.780 "min_cntlid": 1, 00:23:53.780 "max_cntlid": 65519, 00:23:53.780 "namespaces": [ 00:23:53.780 { 00:23:53.780 "nsid": 1, 00:23:53.780 "bdev_name": "Malloc0", 00:23:53.780 "name": "Malloc0", 00:23:53.780 "nguid": "6CDB5DE715F54417878677D7DF3B7197", 00:23:53.780 "uuid": "6cdb5de7-15f5-4417-8786-77d7df3b7197" 00:23:53.780 }, 00:23:53.780 { 00:23:53.780 "nsid": 2, 00:23:53.780 "bdev_name": "Malloc1", 00:23:53.780 "name": "Malloc1", 00:23:53.780 "nguid": "79581BBF18AF4D61BB807C6E7FDB5D7D", 00:23:53.780 "uuid": "79581bbf-18af-4d61-bb80-7c6e7fdb5d7d" 00:23:53.780 } 00:23:53.780 ] 00:23:53.780 } 00:23:53.780 ] 00:23:53.780 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.780 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3026539 00:23:53.780 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:53.780 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.780 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.780 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.780 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:53.780 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.780 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.780 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.780 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:53.780 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.780 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.780 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.780 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:53.780 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:53.780 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:53.780 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:53.780 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:53.780 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:53.780 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:53.780 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:53.780 rmmod 
nvme_tcp 00:23:53.780 rmmod nvme_fabrics 00:23:53.780 rmmod nvme_keyring 00:23:53.780 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:53.780 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:53.780 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:53.780 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3026334 ']' 00:23:53.780 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3026334 00:23:53.780 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 3026334 ']' 00:23:53.780 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 3026334 00:23:53.780 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:23:53.780 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:53.780 19:14:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3026334 00:23:54.040 19:14:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:54.040 19:14:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:54.040 19:14:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3026334' 00:23:54.040 killing process with pid 3026334 00:23:54.040 19:14:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 3026334 00:23:54.040 19:14:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 3026334 00:23:54.040 19:14:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:54.040 19:14:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:54.040 19:14:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:54.040 19:14:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:54.040 19:14:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:23:54.040 19:14:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:54.040 19:14:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:23:54.040 19:14:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:54.040 19:14:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:54.040 19:14:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.040 19:14:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.040 19:14:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:56.584 00:23:56.584 real 0m11.652s 00:23:56.584 user 0m8.594s 00:23:56.584 sys 0m6.178s 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:56.584 ************************************ 00:23:56.584 END TEST nvmf_aer 00:23:56.584 ************************************ 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.584 ************************************ 00:23:56.584 START TEST nvmf_async_init 00:23:56.584 ************************************ 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:56.584 * Looking for test storage... 00:23:56.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:56.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.584 --rc genhtml_branch_coverage=1 00:23:56.584 --rc genhtml_function_coverage=1 00:23:56.584 --rc genhtml_legend=1 00:23:56.584 --rc geninfo_all_blocks=1 00:23:56.584 --rc geninfo_unexecuted_blocks=1 00:23:56.584 00:23:56.584 ' 00:23:56.584 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:56.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.584 --rc genhtml_branch_coverage=1 00:23:56.584 --rc genhtml_function_coverage=1 00:23:56.584 --rc genhtml_legend=1 00:23:56.584 --rc geninfo_all_blocks=1 00:23:56.584 --rc geninfo_unexecuted_blocks=1 00:23:56.584 00:23:56.584 ' 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:56.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.585 --rc genhtml_branch_coverage=1 00:23:56.585 --rc genhtml_function_coverage=1 00:23:56.585 --rc genhtml_legend=1 00:23:56.585 --rc geninfo_all_blocks=1 00:23:56.585 --rc geninfo_unexecuted_blocks=1 00:23:56.585 00:23:56.585 ' 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:56.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.585 --rc genhtml_branch_coverage=1 00:23:56.585 --rc genhtml_function_coverage=1 00:23:56.585 --rc genhtml_legend=1 00:23:56.585 --rc geninfo_all_blocks=1 00:23:56.585 --rc geninfo_unexecuted_blocks=1 00:23:56.585 00:23:56.585 ' 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:56.585 19:14:13 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:56.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:56.585 19:14:13 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=6b154d8c52dc4f8591b823c773ac0acf 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:56.585 19:14:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:04.822 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:04.822 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:04.822 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:04.823 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:04.823 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:04.823 19:14:20 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:04.823 19:14:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:04.823 19:14:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:04.823 19:14:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:04.823 19:14:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:04.823 19:14:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:04.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:04.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.503 ms 00:24:04.823 00:24:04.823 --- 10.0.0.2 ping statistics --- 00:24:04.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.823 rtt min/avg/max/mdev = 0.503/0.503/0.503/0.000 ms 00:24:04.823 19:14:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:04.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:04.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:24:04.823 00:24:04.823 --- 10.0.0.1 ping statistics --- 00:24:04.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.823 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:24:04.823 19:14:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:04.823 19:14:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:24:04.823 19:14:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:04.823 19:14:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:04.823 19:14:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:04.823 19:14:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:04.823 19:14:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:04.823 19:14:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:04.823 19:14:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:04.823 19:14:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:04.823 19:14:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:04.823 19:14:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:04.823 19:14:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:04.823 19:14:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3030828 00:24:04.823 19:14:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3030828 00:24:04.823 19:14:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:04.823 19:14:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 3030828 ']' 00:24:04.823 19:14:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.823 19:14:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:04.823 19:14:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:04.823 19:14:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:04.823 19:14:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:04.823 [2024-11-26 19:14:21.239691] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
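The namespace plumbing and target bring-up traced above condense to a short sequence. A sketch with the commands lifted from the trace; the poll loop paraphrases waitforlisten (the real helper lives in autotest_common.sh) and assumes the default /var/tmp/spdk.sock RPC socket:

  # Move the target-side port into its own netns and address both ends.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  # Launch nvmf_tgt inside the netns with the core mask from the log (0x1),
  # then poll the RPC socket; rpc_get_methods is a cheap no-op query.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  until ./scripts/rpc.py rpc_get_methods &> /dev/null; do sleep 0.5; done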
00:24:04.823 [2024-11-26 19:14:21.239758] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:04.823 [2024-11-26 19:14:21.339812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.823 [2024-11-26 19:14:21.392087] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:04.823 [2024-11-26 19:14:21.392141] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:04.823 [2024-11-26 19:14:21.392150] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:04.823 [2024-11-26 19:14:21.392169] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:04.823 [2024-11-26 19:14:21.392176] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:04.823 [2024-11-26 19:14:21.392932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.084 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:05.084 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:24:05.084 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:05.084 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:05.084 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.084 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:05.084 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:05.084 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.084 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.084 [2024-11-26 19:14:22.104650] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:05.084 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.084 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:05.084 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.084 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.084 null0 00:24:05.084 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.084 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:05.084 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.084 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.084 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.084 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:05.084 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:05.084 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.084 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.084 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 6b154d8c52dc4f8591b823c773ac0acf 00:24:05.084 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.084 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.084 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.084 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:05.084 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.084 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.084 [2024-11-26 19:14:22.165031] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:05.084 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.084 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:05.084 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.084 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.345 nvme0n1 00:24:05.345 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.345 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:05.345 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.345 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.345 [ 00:24:05.345 { 00:24:05.345 "name": "nvme0n1", 00:24:05.345 "aliases": [ 00:24:05.345 "6b154d8c-52dc-4f85-91b8-23c773ac0acf" 00:24:05.345 ], 00:24:05.345 "product_name": "NVMe disk", 00:24:05.345 "block_size": 512, 00:24:05.345 "num_blocks": 2097152, 00:24:05.345 "uuid": "6b154d8c-52dc-4f85-91b8-23c773ac0acf", 00:24:05.345 "numa_id": 0, 00:24:05.345 "assigned_rate_limits": { 00:24:05.345 "rw_ios_per_sec": 0, 00:24:05.345 "rw_mbytes_per_sec": 0, 00:24:05.345 "r_mbytes_per_sec": 0, 00:24:05.345 "w_mbytes_per_sec": 0 00:24:05.345 }, 00:24:05.345 "claimed": false, 00:24:05.345 "zoned": false, 00:24:05.345 "supported_io_types": { 00:24:05.345 "read": true, 00:24:05.345 "write": true, 00:24:05.345 "unmap": false, 00:24:05.345 "flush": true, 00:24:05.345 "reset": true, 00:24:05.345 "nvme_admin": true, 00:24:05.345 "nvme_io": true, 00:24:05.345 "nvme_io_md": false, 00:24:05.345 "write_zeroes": true, 00:24:05.345 "zcopy": false, 00:24:05.345 "get_zone_info": false, 00:24:05.345 "zone_management": false, 00:24:05.345 "zone_append": false, 00:24:05.345 "compare": true, 00:24:05.345 "compare_and_write": true, 00:24:05.345 "abort": true, 00:24:05.345 "seek_hole": false, 00:24:05.345 "seek_data": false, 00:24:05.345 "copy": true, 00:24:05.345 "nvme_iov_md": false 00:24:05.345 }, 00:24:05.345 
"memory_domains": [ 00:24:05.345 { 00:24:05.345 "dma_device_id": "system", 00:24:05.345 "dma_device_type": 1 00:24:05.345 } 00:24:05.345 ], 00:24:05.345 "driver_specific": { 00:24:05.345 "nvme": [ 00:24:05.345 { 00:24:05.345 "trid": { 00:24:05.345 "trtype": "TCP", 00:24:05.345 "adrfam": "IPv4", 00:24:05.345 "traddr": "10.0.0.2", 00:24:05.345 "trsvcid": "4420", 00:24:05.345 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:05.345 }, 00:24:05.345 "ctrlr_data": { 00:24:05.345 "cntlid": 1, 00:24:05.345 "vendor_id": "0x8086", 00:24:05.345 "model_number": "SPDK bdev Controller", 00:24:05.345 "serial_number": "00000000000000000000", 00:24:05.345 "firmware_revision": "25.01", 00:24:05.345 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:05.345 "oacs": { 00:24:05.345 "security": 0, 00:24:05.345 "format": 0, 00:24:05.345 "firmware": 0, 00:24:05.345 "ns_manage": 0 00:24:05.345 }, 00:24:05.345 "multi_ctrlr": true, 00:24:05.345 "ana_reporting": false 00:24:05.345 }, 00:24:05.345 "vs": { 00:24:05.345 "nvme_version": "1.3" 00:24:05.345 }, 00:24:05.345 "ns_data": { 00:24:05.345 "id": 1, 00:24:05.345 "can_share": true 00:24:05.345 } 00:24:05.345 } 00:24:05.345 ], 00:24:05.345 "mp_policy": "active_passive" 00:24:05.345 } 00:24:05.345 } 00:24:05.345 ] 00:24:05.345 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.345 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:05.345 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.345 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.345 [2024-11-26 19:14:22.441546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:05.345 [2024-11-26 19:14:22.441631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x232fce0 (9): Bad file descriptor 00:24:05.607 [2024-11-26 19:14:22.573268] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:24:05.607 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.607 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:05.607 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.607 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.607 [ 00:24:05.607 { 00:24:05.607 "name": "nvme0n1", 00:24:05.607 "aliases": [ 00:24:05.607 "6b154d8c-52dc-4f85-91b8-23c773ac0acf" 00:24:05.607 ], 00:24:05.607 "product_name": "NVMe disk", 00:24:05.607 "block_size": 512, 00:24:05.607 "num_blocks": 2097152, 00:24:05.607 "uuid": "6b154d8c-52dc-4f85-91b8-23c773ac0acf", 00:24:05.607 "numa_id": 0, 00:24:05.607 "assigned_rate_limits": { 00:24:05.607 "rw_ios_per_sec": 0, 00:24:05.607 "rw_mbytes_per_sec": 0, 00:24:05.607 "r_mbytes_per_sec": 0, 00:24:05.607 "w_mbytes_per_sec": 0 00:24:05.607 }, 00:24:05.607 "claimed": false, 00:24:05.607 "zoned": false, 00:24:05.607 "supported_io_types": { 00:24:05.607 "read": true, 00:24:05.607 "write": true, 00:24:05.607 "unmap": false, 00:24:05.607 "flush": true, 00:24:05.607 "reset": true, 00:24:05.607 "nvme_admin": true, 00:24:05.607 "nvme_io": true, 00:24:05.607 "nvme_io_md": false, 00:24:05.607 "write_zeroes": true, 00:24:05.607 "zcopy": false, 00:24:05.607 "get_zone_info": false, 00:24:05.607 "zone_management": false, 00:24:05.607 "zone_append": false, 00:24:05.607 "compare": true, 00:24:05.607 "compare_and_write": true, 00:24:05.607 "abort": true, 00:24:05.607 "seek_hole": false, 00:24:05.607 "seek_data": false, 00:24:05.607 "copy": true, 00:24:05.607 "nvme_iov_md": false 00:24:05.607 }, 00:24:05.607 "memory_domains": [ 00:24:05.607 { 00:24:05.607 "dma_device_id": "system", 00:24:05.607 "dma_device_type": 1 00:24:05.607 } 00:24:05.607 ], 00:24:05.607 "driver_specific": { 00:24:05.607 "nvme": [ 00:24:05.607 { 00:24:05.607 "trid": { 00:24:05.607 "trtype": "TCP", 00:24:05.607 "adrfam": "IPv4", 00:24:05.607 "traddr": "10.0.0.2", 00:24:05.607 "trsvcid": "4420", 00:24:05.607 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:05.607 }, 00:24:05.607 "ctrlr_data": { 00:24:05.607 "cntlid": 2, 00:24:05.607 "vendor_id": "0x8086", 00:24:05.607 "model_number": "SPDK bdev Controller", 00:24:05.607 "serial_number": "00000000000000000000", 00:24:05.607 "firmware_revision": "25.01", 00:24:05.607 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:05.607 "oacs": { 00:24:05.607 "security": 0, 00:24:05.607 "format": 0, 00:24:05.607 "firmware": 0, 00:24:05.607 "ns_manage": 0 00:24:05.607 }, 00:24:05.607 "multi_ctrlr": true, 00:24:05.607 "ana_reporting": false 00:24:05.607 }, 00:24:05.607 "vs": { 00:24:05.607 "nvme_version": "1.3" 00:24:05.607 }, 00:24:05.607 "ns_data": { 00:24:05.607 "id": 1, 00:24:05.607 "can_share": true 00:24:05.607 } 00:24:05.607 } 00:24:05.607 ], 00:24:05.607 "mp_policy": "active_passive" 00:24:05.607 } 00:24:05.607 } 00:24:05.607 ] 00:24:05.607 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.607 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.607 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.607 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.607 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
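The two bdev_get_bdevs dumps differ only in ctrlr_data.cntlid (1 before the reset, 2 after), which is how the test confirms the reset tore down and re-established the controller. Pulling that field out by hand is a one-liner; jq is illustrative here, the script itself compares the raw rpc_cmd JSON:

  ./scripts/rpc.py bdev_get_bdevs -b nvme0n1 \
    | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'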
00:24:05.607 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:05.607 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.tQQDtw54B0 00:24:05.607 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:05.608 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.tQQDtw54B0 00:24:05.608 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.tQQDtw54B0 00:24:05.608 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.608 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.608 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.608 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:05.608 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.608 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.608 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.608 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:05.608 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.608 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.608 [2024-11-26 19:14:22.662236] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:05.608 [2024-11-26 19:14:22.662401] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:05.608 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.608 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:24:05.608 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.608 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.608 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.608 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:05.608 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.608 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.608 [2024-11-26 19:14:22.686310] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:05.608 nvme0n1 00:24:05.608 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.608 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:24:05.608 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.608 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.608 [ 00:24:05.608 { 00:24:05.608 "name": "nvme0n1", 00:24:05.608 "aliases": [ 00:24:05.608 "6b154d8c-52dc-4f85-91b8-23c773ac0acf" 00:24:05.608 ], 00:24:05.608 "product_name": "NVMe disk", 00:24:05.608 "block_size": 512, 00:24:05.608 "num_blocks": 2097152, 00:24:05.608 "uuid": "6b154d8c-52dc-4f85-91b8-23c773ac0acf", 00:24:05.608 "numa_id": 0, 00:24:05.608 "assigned_rate_limits": { 00:24:05.608 "rw_ios_per_sec": 0, 00:24:05.608 "rw_mbytes_per_sec": 0, 00:24:05.608 "r_mbytes_per_sec": 0, 00:24:05.608 "w_mbytes_per_sec": 0 00:24:05.608 }, 00:24:05.608 "claimed": false, 00:24:05.608 "zoned": false, 00:24:05.608 "supported_io_types": { 00:24:05.608 "read": true, 00:24:05.608 "write": true, 00:24:05.608 "unmap": false, 00:24:05.608 "flush": true, 00:24:05.608 "reset": true, 00:24:05.608 "nvme_admin": true, 00:24:05.608 "nvme_io": true, 00:24:05.608 "nvme_io_md": false, 00:24:05.608 "write_zeroes": true, 00:24:05.608 "zcopy": false, 00:24:05.608 "get_zone_info": false, 00:24:05.608 "zone_management": false, 00:24:05.608 "zone_append": false, 00:24:05.608 "compare": true, 00:24:05.608 "compare_and_write": true, 00:24:05.608 "abort": true, 00:24:05.608 "seek_hole": false, 00:24:05.608 "seek_data": false, 00:24:05.608 "copy": true, 00:24:05.608 "nvme_iov_md": false 00:24:05.608 }, 00:24:05.608 "memory_domains": [ 00:24:05.608 { 00:24:05.608 "dma_device_id": "system", 00:24:05.608 "dma_device_type": 1 00:24:05.608 } 00:24:05.608 ], 00:24:05.608 "driver_specific": { 00:24:05.608 "nvme": [ 00:24:05.608 { 00:24:05.608 "trid": { 00:24:05.608 "trtype": "TCP", 00:24:05.608 "adrfam": "IPv4", 00:24:05.608 "traddr": "10.0.0.2", 00:24:05.608 "trsvcid": "4421", 00:24:05.608 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:05.608 }, 00:24:05.608 "ctrlr_data": { 00:24:05.608 "cntlid": 3, 00:24:05.608 "vendor_id": "0x8086", 00:24:05.608 "model_number": "SPDK bdev Controller", 00:24:05.608 "serial_number": "00000000000000000000", 00:24:05.608 "firmware_revision": "25.01", 00:24:05.608 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:05.608 "oacs": { 00:24:05.608 "security": 0, 00:24:05.608 "format": 0, 00:24:05.608 "firmware": 0, 00:24:05.608 "ns_manage": 0 00:24:05.608 }, 00:24:05.608 "multi_ctrlr": true, 00:24:05.608 "ana_reporting": false 00:24:05.608 }, 00:24:05.608 "vs": { 00:24:05.608 "nvme_version": "1.3" 00:24:05.608 }, 00:24:05.608 "ns_data": { 00:24:05.608 "id": 1, 00:24:05.608 "can_share": true 00:24:05.608 } 00:24:05.608 } 00:24:05.608 ], 00:24:05.608 "mp_policy": "active_passive" 00:24:05.608 } 00:24:05.608 } 00:24:05.608 ] 00:24:05.608 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.608 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.608 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.608 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.608 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.608 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.tQQDtw54B0 00:24:05.608 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
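The TLS leg that just completed condenses to: write the PSK interchange key to a 0600 file, register it on the keyring, close the subsystem to unknown hosts, open a --secure-channel listener on 4421, allow host1 with the key, and reconnect through it. Key material and flags are verbatim from the trace:

  key_path=$(mktemp)
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
  chmod 0600 "$key_path"
  ./scripts/rpc.py keyring_file_add_key key0 "$key_path"
  ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0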
00:24:05.608 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:24:05.608 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:05.608 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:24:05.608 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:05.608 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:24:05.608 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:05.608 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:05.608 rmmod nvme_tcp 00:24:05.869 rmmod nvme_fabrics 00:24:05.869 rmmod nvme_keyring 00:24:05.869 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:05.869 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:24:05.869 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:24:05.869 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3030828 ']' 00:24:05.869 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3030828 00:24:05.869 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 3030828 ']' 00:24:05.869 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 3030828 00:24:05.869 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:24:05.869 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:05.869 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3030828 00:24:05.869 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:05.869 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:05.869 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3030828' 00:24:05.869 killing process with pid 3030828 00:24:05.869 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 3030828 00:24:05.869 19:14:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 3030828 00:24:06.130 19:14:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:06.130 19:14:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:06.130 19:14:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:06.130 19:14:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:24:06.130 19:14:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:24:06.130 19:14:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:06.130 19:14:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:24:06.130 19:14:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:06.130 19:14:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:06.130 19:14:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
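nvmftestfini's visible effects, in order: unload the host-side NVMe modules, kill the target, strip the SPDK_NVMF-tagged iptables rules, and tear down the namespace. A sketch; the netns delete paraphrases _remove_spdk_ns, whose internals are not shown in the trace:

  modprobe -v -r nvme-tcp      # cascades rmmod of nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"   # killprocess/wait, as traced above
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk      # assumed: the visible effect of _remove_spdk_ns
  ip -4 addr flush cvl_0_1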
00:24:06.130 19:14:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.130 19:14:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.043 19:14:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:08.043 00:24:08.043 real 0m11.827s 00:24:08.043 user 0m4.240s 00:24:08.043 sys 0m6.165s 00:24:08.043 19:14:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:08.043 19:14:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.043 ************************************ 00:24:08.043 END TEST nvmf_async_init 00:24:08.043 ************************************ 00:24:08.043 19:14:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:08.043 19:14:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:08.043 19:14:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:08.043 19:14:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.304 ************************************ 00:24:08.304 START TEST dma 00:24:08.304 ************************************ 00:24:08.304 19:14:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:08.304 * Looking for test storage... 00:24:08.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:08.304 19:14:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:08.304 19:14:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:24:08.304 19:14:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:08.304 19:14:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:08.304 19:14:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:08.304 19:14:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:08.304 19:14:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:08.304 19:14:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:24:08.304 19:14:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:24:08.304 19:14:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:24:08.304 19:14:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:24:08.304 19:14:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:24:08.304 19:14:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:24:08.304 19:14:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:24:08.304 19:14:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:08.304 19:14:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:24:08.304 19:14:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:24:08.304 19:14:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:08.304 19:14:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:08.304 19:14:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:24:08.304 19:14:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:24:08.304 19:14:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:08.304 19:14:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:24:08.304 19:14:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:24:08.304 19:14:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:24:08.304 19:14:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:24:08.304 19:14:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:08.304 19:14:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:24:08.304 19:14:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:24:08.304 19:14:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:08.304 19:14:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:08.304 19:14:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:24:08.304 19:14:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:08.304 19:14:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:08.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.304 --rc genhtml_branch_coverage=1 00:24:08.304 --rc genhtml_function_coverage=1 00:24:08.304 --rc genhtml_legend=1 00:24:08.304 --rc geninfo_all_blocks=1 00:24:08.304 --rc geninfo_unexecuted_blocks=1 00:24:08.304 00:24:08.304 ' 00:24:08.304 19:14:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:08.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.304 --rc genhtml_branch_coverage=1 00:24:08.304 --rc genhtml_function_coverage=1 00:24:08.304 --rc genhtml_legend=1 00:24:08.304 --rc geninfo_all_blocks=1 00:24:08.304 --rc geninfo_unexecuted_blocks=1 00:24:08.304 00:24:08.304 ' 00:24:08.304 19:14:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:08.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.304 --rc genhtml_branch_coverage=1 00:24:08.305 --rc genhtml_function_coverage=1 00:24:08.305 --rc genhtml_legend=1 00:24:08.305 --rc geninfo_all_blocks=1 00:24:08.305 --rc geninfo_unexecuted_blocks=1 00:24:08.305 00:24:08.305 ' 00:24:08.305 19:14:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:08.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.305 --rc genhtml_branch_coverage=1 00:24:08.305 --rc genhtml_function_coverage=1 00:24:08.305 --rc genhtml_legend=1 00:24:08.305 --rc geninfo_all_blocks=1 00:24:08.305 --rc geninfo_unexecuted_blocks=1 00:24:08.305 00:24:08.305 ' 00:24:08.305 19:14:25 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:08.305 19:14:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:24:08.305 19:14:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.305 19:14:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.305 19:14:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.305 19:14:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.305 
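The lcov gate re-traced here is the same cmp_versions walk as in the async_init prologue: tokenize both version strings on '.', '-' and ':' and compare element-wise until one side wins. A condensed sketch of that helper (the decimal() sanity checks from scripts/common.sh are dropped for brevity):

  lt() {  # returns 0 iff version $1 < version $2
    local -a v1 v2; local i
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
      ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
      ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
    done
    return 1
  }
  if lt "$(lcov --version | awk '{print $NF}')" 2; then
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi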
19:14:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.305 19:14:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.305 19:14:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.305 19:14:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.305 19:14:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.305 19:14:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.305 19:14:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:08.305 19:14:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:08.305 19:14:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.305 19:14:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:08.305 19:14:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:08.305 19:14:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:08.305 19:14:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:08.305 19:14:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:24:08.305 19:14:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.305 19:14:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.305 19:14:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.305 19:14:25 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.305 19:14:25 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.305 19:14:25 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.305 19:14:25 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:24:08.305 19:14:25 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.305 19:14:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:24:08.305 19:14:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:08.305 19:14:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:08.305 19:14:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:08.305 19:14:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.305 19:14:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.305 19:14:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:08.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:08.305 19:14:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:08.305 19:14:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:08.567 19:14:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:08.567 19:14:25 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:08.567 19:14:25 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:24:08.567 00:24:08.567 real 0m0.237s 00:24:08.567 user 0m0.139s 00:24:08.567 sys 0m0.112s 00:24:08.567 19:14:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:08.567 19:14:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:24:08.567 ************************************ 00:24:08.567 END TEST dma 00:24:08.567 ************************************ 00:24:08.567 19:14:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:08.567 19:14:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:08.567 19:14:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:08.567 19:14:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.567 ************************************ 00:24:08.567 START TEST nvmf_identify 00:24:08.567 
************************************ 00:24:08.567 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:08.567 * Looking for test storage... 00:24:08.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:08.567 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:08.567 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:24:08.567 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:08.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.829 --rc genhtml_branch_coverage=1 00:24:08.829 --rc genhtml_function_coverage=1 00:24:08.829 --rc genhtml_legend=1 00:24:08.829 --rc geninfo_all_blocks=1 00:24:08.829 --rc geninfo_unexecuted_blocks=1 00:24:08.829 00:24:08.829 ' 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:08.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.829 --rc genhtml_branch_coverage=1 00:24:08.829 --rc genhtml_function_coverage=1 00:24:08.829 --rc genhtml_legend=1 00:24:08.829 --rc geninfo_all_blocks=1 00:24:08.829 --rc geninfo_unexecuted_blocks=1 00:24:08.829 00:24:08.829 ' 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:08.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.829 --rc genhtml_branch_coverage=1 00:24:08.829 --rc genhtml_function_coverage=1 00:24:08.829 --rc genhtml_legend=1 00:24:08.829 --rc geninfo_all_blocks=1 00:24:08.829 --rc geninfo_unexecuted_blocks=1 00:24:08.829 00:24:08.829 ' 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:08.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.829 --rc genhtml_branch_coverage=1 00:24:08.829 --rc genhtml_function_coverage=1 00:24:08.829 --rc genhtml_legend=1 00:24:08.829 --rc geninfo_all_blocks=1 00:24:08.829 --rc geninfo_unexecuted_blocks=1 00:24:08.829 00:24:08.829 ' 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.829 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.830 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.830 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.830 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:08.830 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.830 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:24:08.830 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:08.830 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:08.830 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:08.830 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.830 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.830 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:08.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:08.830 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:08.830 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:08.830 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:08.830 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:08.830 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:08.830 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:08.830 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:08.830 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:08.830 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:08.830 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:08.830 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:08.830 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.830 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:08.830 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.830 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:08.830 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:08.830 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:24:08.830 19:14:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:16.976 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:16.976 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
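What the scan around this point is doing: nvmf/common.sh builds a table of NVMe-oF-capable NIC PCI IDs (here the Intel E810 family, vendor 0x8086 device 0x159b), filters the host's PCI devices against it, and resolves each match to its kernel netdev via sysfs. The harness reads a prebuilt pci_bus_cache rather than calling lspci; a rough standalone equivalent of the same lookup, using stock lspci instead of the cache (IDs taken from this run), might look like:

  # List E810 ports (8086:159b, as matched above) and the netdevs bound to them,
  # mirroring how common.sh expands /sys/bus/pci/devices/$pci/net/*.
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      echo "NIC $pci -> $(ls /sys/bus/pci/devices/$pci/net/ 2>/dev/null)"
  done

On this node that resolves 0000:4b:00.0 to cvl_0_0 and 0000:4b:00.1 to cvl_0_1, matching the "Found net devices" lines that follow.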
00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:16.976 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:16.976 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:16.977 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:16.977 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:16.977 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:24:16.977 00:24:16.977 --- 10.0.0.2 ping statistics --- 00:24:16.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.977 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:16.977 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:16.977 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:24:16.977 00:24:16.977 --- 10.0.0.1 ping statistics --- 00:24:16.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.977 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3035496 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3035496 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 3035496 ']' 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:16.977 19:14:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:16.977 [2024-11-26 19:14:33.479325] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
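Recap of the nvmftestinit network setup traced just above: one E810 port (cvl_0_0) is moved into a private namespace to act as the target side, its sibling (cvl_0_1) stays in the root namespace as the initiator, and a single ping in each direction proves the 10.0.0.0/24 link before the target app starts. A condensed replay of those steps, with the interface names and addresses from this run (run as root):

  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt process itself is then launched under "ip netns exec cvl_0_0_ns_spdk", which is why the DPDK/EAL startup lines that follow come from inside the namespace.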
00:24:16.977 [2024-11-26 19:14:33.479416] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.977 [2024-11-26 19:14:33.579098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:16.977 [2024-11-26 19:14:33.634365] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:16.977 [2024-11-26 19:14:33.634421] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:16.977 [2024-11-26 19:14:33.634430] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:16.977 [2024-11-26 19:14:33.634437] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:16.977 [2024-11-26 19:14:33.634444] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:16.977 [2024-11-26 19:14:33.636458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.977 [2024-11-26 19:14:33.636619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:16.977 [2024-11-26 19:14:33.636780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.977 [2024-11-26 19:14:33.636781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:17.237 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:17.237 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:24:17.237 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:17.237 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.237 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.237 [2024-11-26 19:14:34.310540] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:17.237 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.237 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:17.237 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:17.237 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.238 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:17.238 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.238 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.238 Malloc0 00:24:17.238 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.238 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:17.238 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.238 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.238 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.238 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:17.238 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.238 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.238 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.238 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:17.238 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.238 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.238 [2024-11-26 19:14:34.437038] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:17.238 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.238 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:17.238 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.238 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.499 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.500 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:17.500 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.500 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.500 [ 00:24:17.500 { 00:24:17.500 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:17.500 "subtype": "Discovery", 00:24:17.500 "listen_addresses": [ 00:24:17.500 { 00:24:17.500 "trtype": "TCP", 00:24:17.500 "adrfam": "IPv4", 00:24:17.500 "traddr": "10.0.0.2", 00:24:17.500 "trsvcid": "4420" 00:24:17.500 } 00:24:17.500 ], 00:24:17.500 "allow_any_host": true, 00:24:17.500 "hosts": [] 00:24:17.500 }, 00:24:17.500 { 00:24:17.500 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.500 "subtype": "NVMe", 00:24:17.500 "listen_addresses": [ 00:24:17.500 { 00:24:17.500 "trtype": "TCP", 00:24:17.500 "adrfam": "IPv4", 00:24:17.500 "traddr": "10.0.0.2", 00:24:17.500 "trsvcid": "4420" 00:24:17.500 } 00:24:17.500 ], 00:24:17.500 "allow_any_host": true, 00:24:17.500 "hosts": [], 00:24:17.500 "serial_number": "SPDK00000000000001", 00:24:17.500 "model_number": "SPDK bdev Controller", 00:24:17.500 "max_namespaces": 32, 00:24:17.500 "min_cntlid": 1, 00:24:17.500 "max_cntlid": 65519, 00:24:17.500 "namespaces": [ 00:24:17.500 { 00:24:17.500 "nsid": 1, 00:24:17.500 "bdev_name": "Malloc0", 00:24:17.500 "name": "Malloc0", 00:24:17.500 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:17.500 "eui64": "ABCDEF0123456789", 00:24:17.500 "uuid": "545a7a8c-7956-4034-a4f5-2705afd2a705" 00:24:17.500 } 00:24:17.500 ] 00:24:17.500 } 00:24:17.500 ] 00:24:17.500 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.500 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:17.500 [2024-11-26 19:14:34.501586] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:24:17.500 [2024-11-26 19:14:34.501634] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3035845 ] 00:24:17.500 [2024-11-26 19:14:34.557878] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:24:17.500 [2024-11-26 19:14:34.557945] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:17.500 [2024-11-26 19:14:34.557950] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:17.500 [2024-11-26 19:14:34.557971] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:17.500 [2024-11-26 19:14:34.557982] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:17.500 [2024-11-26 19:14:34.561602] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:24:17.500 [2024-11-26 19:14:34.561652] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xa07690 0 00:24:17.500 [2024-11-26 19:14:34.569171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:17.500 [2024-11-26 19:14:34.569189] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:17.500 [2024-11-26 19:14:34.569194] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:17.500 [2024-11-26 19:14:34.569199] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:17.500 [2024-11-26 19:14:34.569249] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.500 [2024-11-26 19:14:34.569262] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.500 [2024-11-26 19:14:34.569267] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa07690) 00:24:17.500 [2024-11-26 19:14:34.569286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:17.500 [2024-11-26 19:14:34.569313] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa69100, cid 0, qid 0 00:24:17.500 [2024-11-26 19:14:34.577176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.500 [2024-11-26 19:14:34.577187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.500 [2024-11-26 19:14:34.577191] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.500 [2024-11-26 19:14:34.577197] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa69100) on tqpair=0xa07690 00:24:17.500 [2024-11-26 19:14:34.577210] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:17.500 [2024-11-26 19:14:34.577218] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:24:17.500 [2024-11-26 19:14:34.577224] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:24:17.500 [2024-11-26 19:14:34.577245] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.500 [2024-11-26 19:14:34.577250] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.500 [2024-11-26 19:14:34.577254] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa07690) 00:24:17.500 [2024-11-26 19:14:34.577262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.500 [2024-11-26 19:14:34.577280] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa69100, cid 0, qid 0 00:24:17.500 [2024-11-26 19:14:34.577479] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.500 [2024-11-26 19:14:34.577487] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.500 [2024-11-26 19:14:34.577491] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.500 [2024-11-26 19:14:34.577495] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa69100) on tqpair=0xa07690 00:24:17.500 [2024-11-26 19:14:34.577504] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:24:17.500 [2024-11-26 19:14:34.577512] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:24:17.500 [2024-11-26 19:14:34.577521] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.500 [2024-11-26 19:14:34.577525] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.500 [2024-11-26 19:14:34.577529] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa07690) 00:24:17.500 [2024-11-26 19:14:34.577536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.500 [2024-11-26 19:14:34.577548] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa69100, cid 0, qid 0 00:24:17.500 [2024-11-26 19:14:34.577732] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.500 [2024-11-26 19:14:34.577738] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.500 [2024-11-26 19:14:34.577742] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.500 [2024-11-26 19:14:34.577746] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa69100) on tqpair=0xa07690 00:24:17.500 [2024-11-26 19:14:34.577753] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:24:17.500 [2024-11-26 19:14:34.577762] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:17.500 [2024-11-26 19:14:34.577769] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.500 [2024-11-26 19:14:34.577777] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.500 [2024-11-26 19:14:34.577782] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa07690) 00:24:17.500 [2024-11-26 19:14:34.577789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.500 [2024-11-26 19:14:34.577800] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa69100, cid 0, qid 0 
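The PROPERTY GET/SET exchanges being traced through here are the standard fabrics controller-enable handshake: after FABRIC CONNECT on the admin queue, the host reads VS and CAP, finds CC.EN = 0 and CSTS.RDY = 0, writes CC.EN = 1, and polls until CSTS.RDY = 1 before issuing IDENTIFY, configuring AER, and arming keep-alive. The controller on the other end is the discovery subsystem provisioned a few lines earlier via rpc_cmd; condensed into plain rpc.py calls (run from the SPDK tree root, default /var/tmp/spdk.sock assumed), that setup plus the query was:

  # Transport, backing bdev, subsystem, namespace, listeners -- values from this run.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # Query the discovery service, as host/identify.sh@39 does:
  build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all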
00:24:17.500 [2024-11-26 19:14:34.578009] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.500 [2024-11-26 19:14:34.578016] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.500 [2024-11-26 19:14:34.578020] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.500 [2024-11-26 19:14:34.578024] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa69100) on tqpair=0xa07690 00:24:17.500 [2024-11-26 19:14:34.578030] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:17.500 [2024-11-26 19:14:34.578040] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.500 [2024-11-26 19:14:34.578044] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.500 [2024-11-26 19:14:34.578048] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa07690) 00:24:17.500 [2024-11-26 19:14:34.578055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.500 [2024-11-26 19:14:34.578065] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa69100, cid 0, qid 0 00:24:17.500 [2024-11-26 19:14:34.578276] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.500 [2024-11-26 19:14:34.578283] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.500 [2024-11-26 19:14:34.578287] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.500 [2024-11-26 19:14:34.578291] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa69100) on tqpair=0xa07690 00:24:17.500 [2024-11-26 19:14:34.578296] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:17.500 [2024-11-26 19:14:34.578302] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:17.501 [2024-11-26 19:14:34.578310] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:17.501 [2024-11-26 19:14:34.578423] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:24:17.501 [2024-11-26 19:14:34.578429] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:17.501 [2024-11-26 19:14:34.578439] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.501 [2024-11-26 19:14:34.578444] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.501 [2024-11-26 19:14:34.578448] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa07690) 00:24:17.501 [2024-11-26 19:14:34.578455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.501 [2024-11-26 19:14:34.578466] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa69100, cid 0, qid 0 00:24:17.501 [2024-11-26 19:14:34.578677] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.501 [2024-11-26 19:14:34.578684] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.501 [2024-11-26 19:14:34.578688] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.501 [2024-11-26 19:14:34.578692] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa69100) on tqpair=0xa07690 00:24:17.501 [2024-11-26 19:14:34.578697] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:17.501 [2024-11-26 19:14:34.578711] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.501 [2024-11-26 19:14:34.578716] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.501 [2024-11-26 19:14:34.578719] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa07690) 00:24:17.501 [2024-11-26 19:14:34.578726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.501 [2024-11-26 19:14:34.578737] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa69100, cid 0, qid 0 00:24:17.501 [2024-11-26 19:14:34.578951] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.501 [2024-11-26 19:14:34.578958] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.501 [2024-11-26 19:14:34.578962] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.501 [2024-11-26 19:14:34.578965] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa69100) on tqpair=0xa07690 00:24:17.501 [2024-11-26 19:14:34.578971] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:17.501 [2024-11-26 19:14:34.578976] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:17.501 [2024-11-26 19:14:34.578985] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:24:17.501 [2024-11-26 19:14:34.578994] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:17.501 [2024-11-26 19:14:34.579005] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.501 [2024-11-26 19:14:34.579009] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa07690) 00:24:17.501 [2024-11-26 19:14:34.579017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.501 [2024-11-26 19:14:34.579027] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa69100, cid 0, qid 0 00:24:17.501 [2024-11-26 19:14:34.579270] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.501 [2024-11-26 19:14:34.579278] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.501 [2024-11-26 19:14:34.579282] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.501 [2024-11-26 19:14:34.579286] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa07690): datao=0, datal=4096, cccid=0 00:24:17.501 [2024-11-26 19:14:34.579291] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xa69100) on tqpair(0xa07690): expected_datao=0, payload_size=4096 00:24:17.501 [2024-11-26 19:14:34.579296] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.501 [2024-11-26 19:14:34.579306] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.501 [2024-11-26 19:14:34.579311] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.501 [2024-11-26 19:14:34.579472] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.501 [2024-11-26 19:14:34.579478] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.501 [2024-11-26 19:14:34.579482] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.501 [2024-11-26 19:14:34.579486] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa69100) on tqpair=0xa07690 00:24:17.501 [2024-11-26 19:14:34.579497] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:24:17.501 [2024-11-26 19:14:34.579503] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:24:17.501 [2024-11-26 19:14:34.579507] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:24:17.501 [2024-11-26 19:14:34.579514] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:24:17.501 [2024-11-26 19:14:34.579522] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:24:17.501 [2024-11-26 19:14:34.579527] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:24:17.501 [2024-11-26 19:14:34.579537] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:17.501 [2024-11-26 19:14:34.579545] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.501 [2024-11-26 19:14:34.579549] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.501 [2024-11-26 19:14:34.579553] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa07690) 00:24:17.501 [2024-11-26 19:14:34.579561] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:17.501 [2024-11-26 19:14:34.579572] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa69100, cid 0, qid 0 00:24:17.501 [2024-11-26 19:14:34.579769] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.501 [2024-11-26 19:14:34.579776] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.501 [2024-11-26 19:14:34.579780] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.501 [2024-11-26 19:14:34.579783] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa69100) on tqpair=0xa07690 00:24:17.501 [2024-11-26 19:14:34.579793] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.501 [2024-11-26 19:14:34.579797] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.501 [2024-11-26 19:14:34.579801] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa07690) 00:24:17.501 [2024-11-26 
19:14:34.579807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.501 [2024-11-26 19:14:34.579814] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.501 [2024-11-26 19:14:34.579818] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.501 [2024-11-26 19:14:34.579821] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xa07690) 00:24:17.501 [2024-11-26 19:14:34.579827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.501 [2024-11-26 19:14:34.579834] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.501 [2024-11-26 19:14:34.579838] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.501 [2024-11-26 19:14:34.579842] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xa07690) 00:24:17.501 [2024-11-26 19:14:34.579848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.501 [2024-11-26 19:14:34.579855] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.501 [2024-11-26 19:14:34.579859] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.501 [2024-11-26 19:14:34.579863] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa07690) 00:24:17.501 [2024-11-26 19:14:34.579869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.501 [2024-11-26 19:14:34.579874] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:17.501 [2024-11-26 19:14:34.579887] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:17.501 [2024-11-26 19:14:34.579895] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.501 [2024-11-26 19:14:34.579899] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa07690) 00:24:17.501 [2024-11-26 19:14:34.579906] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.501 [2024-11-26 19:14:34.579921] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa69100, cid 0, qid 0 00:24:17.501 [2024-11-26 19:14:34.579927] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa69280, cid 1, qid 0 00:24:17.501 [2024-11-26 19:14:34.579932] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa69400, cid 2, qid 0 00:24:17.501 [2024-11-26 19:14:34.579937] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa69580, cid 3, qid 0 00:24:17.501 [2024-11-26 19:14:34.579942] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa69700, cid 4, qid 0 00:24:17.501 [2024-11-26 19:14:34.580181] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.501 [2024-11-26 19:14:34.580189] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.501 [2024-11-26 19:14:34.580192] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.501 
[2024-11-26 19:14:34.580196] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa69700) on tqpair=0xa07690 00:24:17.501 [2024-11-26 19:14:34.580202] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:24:17.501 [2024-11-26 19:14:34.580208] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:24:17.501 [2024-11-26 19:14:34.580219] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.501 [2024-11-26 19:14:34.580224] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa07690) 00:24:17.501 [2024-11-26 19:14:34.580231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.501 [2024-11-26 19:14:34.580241] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa69700, cid 4, qid 0 00:24:17.502 [2024-11-26 19:14:34.580469] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.502 [2024-11-26 19:14:34.580476] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.502 [2024-11-26 19:14:34.580480] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.502 [2024-11-26 19:14:34.580484] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa07690): datao=0, datal=4096, cccid=4 00:24:17.502 [2024-11-26 19:14:34.580488] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa69700) on tqpair(0xa07690): expected_datao=0, payload_size=4096 00:24:17.502 [2024-11-26 19:14:34.580493] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.502 [2024-11-26 19:14:34.580512] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.502 [2024-11-26 19:14:34.580516] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.502 [2024-11-26 19:14:34.580661] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.502 [2024-11-26 19:14:34.580668] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.502 [2024-11-26 19:14:34.580671] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.502 [2024-11-26 19:14:34.580675] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa69700) on tqpair=0xa07690 00:24:17.502 [2024-11-26 19:14:34.580688] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:24:17.502 [2024-11-26 19:14:34.580717] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.502 [2024-11-26 19:14:34.580721] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa07690) 00:24:17.502 [2024-11-26 19:14:34.580729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.502 [2024-11-26 19:14:34.580736] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.502 [2024-11-26 19:14:34.580740] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.502 [2024-11-26 19:14:34.580744] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa07690) 00:24:17.502 [2024-11-26 19:14:34.580753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.502 [2024-11-26 19:14:34.580769] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa69700, cid 4, qid 0 00:24:17.502 [2024-11-26 19:14:34.580774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa69880, cid 5, qid 0 00:24:17.502 [2024-11-26 19:14:34.581046] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.502 [2024-11-26 19:14:34.581053] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.502 [2024-11-26 19:14:34.581056] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.502 [2024-11-26 19:14:34.581061] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa07690): datao=0, datal=1024, cccid=4 00:24:17.502 [2024-11-26 19:14:34.581065] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa69700) on tqpair(0xa07690): expected_datao=0, payload_size=1024 00:24:17.502 [2024-11-26 19:14:34.581070] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.502 [2024-11-26 19:14:34.581077] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.502 [2024-11-26 19:14:34.581081] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.502 [2024-11-26 19:14:34.581087] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.502 [2024-11-26 19:14:34.581094] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.502 [2024-11-26 19:14:34.581097] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.502 [2024-11-26 19:14:34.581101] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa69880) on tqpair=0xa07690 00:24:17.502 [2024-11-26 19:14:34.625169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.502 [2024-11-26 19:14:34.625189] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.502 [2024-11-26 19:14:34.625193] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.502 [2024-11-26 19:14:34.625198] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa69700) on tqpair=0xa07690 00:24:17.502 [2024-11-26 19:14:34.625216] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.502 [2024-11-26 19:14:34.625221] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa07690) 00:24:17.502 [2024-11-26 19:14:34.625230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.502 [2024-11-26 19:14:34.625251] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa69700, cid 4, qid 0 00:24:17.502 [2024-11-26 19:14:34.625362] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.502 [2024-11-26 19:14:34.625369] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.502 [2024-11-26 19:14:34.625373] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.502 [2024-11-26 19:14:34.625377] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa07690): datao=0, datal=3072, cccid=4 00:24:17.502 [2024-11-26 19:14:34.625381] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa69700) on tqpair(0xa07690): expected_datao=0, payload_size=3072 00:24:17.502 [2024-11-26 19:14:34.625386] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:24:17.502 [2024-11-26 19:14:34.625404] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.502 [2024-11-26 19:14:34.625408] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.502 [2024-11-26 19:14:34.667329] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.502 [2024-11-26 19:14:34.667341] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.502 [2024-11-26 19:14:34.667344] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.502 [2024-11-26 19:14:34.667349] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa69700) on tqpair=0xa07690 00:24:17.502 [2024-11-26 19:14:34.667361] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.502 [2024-11-26 19:14:34.667365] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa07690) 00:24:17.502 [2024-11-26 19:14:34.667379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.502 [2024-11-26 19:14:34.667398] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa69700, cid 4, qid 0 00:24:17.502 [2024-11-26 19:14:34.667629] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.502 [2024-11-26 19:14:34.667635] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.502 [2024-11-26 19:14:34.667639] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.502 [2024-11-26 19:14:34.667642] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa07690): datao=0, datal=8, cccid=4 00:24:17.502 [2024-11-26 19:14:34.667647] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa69700) on tqpair(0xa07690): expected_datao=0, payload_size=8 00:24:17.502 [2024-11-26 19:14:34.667651] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.502 [2024-11-26 19:14:34.667658] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.502 [2024-11-26 19:14:34.667662] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.768 [2024-11-26 19:14:34.713172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.768 [2024-11-26 19:14:34.713185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.768 [2024-11-26 19:14:34.713189] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.768 [2024-11-26 19:14:34.713193] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa69700) on tqpair=0xa07690 00:24:17.768 ===================================================== 00:24:17.768 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:17.768 ===================================================== 00:24:17.768 Controller Capabilities/Features 00:24:17.768 ================================ 00:24:17.768 Vendor ID: 0000 00:24:17.768 Subsystem Vendor ID: 0000 00:24:17.768 Serial Number: .................... 00:24:17.768 Model Number: ........................................ 
00:24:17.768 Firmware Version: 25.01 00:24:17.768 Recommended Arb Burst: 0 00:24:17.768 IEEE OUI Identifier: 00 00 00 00:24:17.768 Multi-path I/O 00:24:17.768 May have multiple subsystem ports: No 00:24:17.768 May have multiple controllers: No 00:24:17.768 Associated with SR-IOV VF: No 00:24:17.768 Max Data Transfer Size: 131072 00:24:17.768 Max Number of Namespaces: 0 00:24:17.768 Max Number of I/O Queues: 1024 00:24:17.768 NVMe Specification Version (VS): 1.3 00:24:17.768 NVMe Specification Version (Identify): 1.3 00:24:17.768 Maximum Queue Entries: 128 00:24:17.768 Contiguous Queues Required: Yes 00:24:17.768 Arbitration Mechanisms Supported 00:24:17.768 Weighted Round Robin: Not Supported 00:24:17.768 Vendor Specific: Not Supported 00:24:17.768 Reset Timeout: 15000 ms 00:24:17.768 Doorbell Stride: 4 bytes 00:24:17.768 NVM Subsystem Reset: Not Supported 00:24:17.768 Command Sets Supported 00:24:17.768 NVM Command Set: Supported 00:24:17.768 Boot Partition: Not Supported 00:24:17.768 Memory Page Size Minimum: 4096 bytes 00:24:17.768 Memory Page Size Maximum: 4096 bytes 00:24:17.768 Persistent Memory Region: Not Supported 00:24:17.768 Optional Asynchronous Events Supported 00:24:17.768 Namespace Attribute Notices: Not Supported 00:24:17.768 Firmware Activation Notices: Not Supported 00:24:17.768 ANA Change Notices: Not Supported 00:24:17.768 PLE Aggregate Log Change Notices: Not Supported 00:24:17.768 LBA Status Info Alert Notices: Not Supported 00:24:17.768 EGE Aggregate Log Change Notices: Not Supported 00:24:17.768 Normal NVM Subsystem Shutdown event: Not Supported 00:24:17.768 Zone Descriptor Change Notices: Not Supported 00:24:17.769 Discovery Log Change Notices: Supported 00:24:17.769 Controller Attributes 00:24:17.769 128-bit Host Identifier: Not Supported 00:24:17.769 Non-Operational Permissive Mode: Not Supported 00:24:17.769 NVM Sets: Not Supported 00:24:17.769 Read Recovery Levels: Not Supported 00:24:17.769 Endurance Groups: Not Supported 00:24:17.769 Predictable Latency Mode: Not Supported 00:24:17.769 Traffic Based Keep ALive: Not Supported 00:24:17.769 Namespace Granularity: Not Supported 00:24:17.769 SQ Associations: Not Supported 00:24:17.769 UUID List: Not Supported 00:24:17.769 Multi-Domain Subsystem: Not Supported 00:24:17.769 Fixed Capacity Management: Not Supported 00:24:17.769 Variable Capacity Management: Not Supported 00:24:17.769 Delete Endurance Group: Not Supported 00:24:17.769 Delete NVM Set: Not Supported 00:24:17.769 Extended LBA Formats Supported: Not Supported 00:24:17.769 Flexible Data Placement Supported: Not Supported 00:24:17.769 00:24:17.769 Controller Memory Buffer Support 00:24:17.769 ================================ 00:24:17.769 Supported: No 00:24:17.769 00:24:17.769 Persistent Memory Region Support 00:24:17.769 ================================ 00:24:17.769 Supported: No 00:24:17.769 00:24:17.769 Admin Command Set Attributes 00:24:17.769 ============================ 00:24:17.769 Security Send/Receive: Not Supported 00:24:17.769 Format NVM: Not Supported 00:24:17.769 Firmware Activate/Download: Not Supported 00:24:17.769 Namespace Management: Not Supported 00:24:17.769 Device Self-Test: Not Supported 00:24:17.769 Directives: Not Supported 00:24:17.769 NVMe-MI: Not Supported 00:24:17.769 Virtualization Management: Not Supported 00:24:17.769 Doorbell Buffer Config: Not Supported 00:24:17.769 Get LBA Status Capability: Not Supported 00:24:17.769 Command & Feature Lockdown Capability: Not Supported 00:24:17.769 Abort Command Limit: 1 00:24:17.769 Async 
Event Request Limit: 4 00:24:17.769 Number of Firmware Slots: N/A 00:24:17.769 Firmware Slot 1 Read-Only: N/A 00:24:17.769 Firmware Activation Without Reset: N/A 00:24:17.769 Multiple Update Detection Support: N/A 00:24:17.769 Firmware Update Granularity: No Information Provided 00:24:17.769 Per-Namespace SMART Log: No 00:24:17.769 Asymmetric Namespace Access Log Page: Not Supported 00:24:17.769 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:17.769 Command Effects Log Page: Not Supported 00:24:17.769 Get Log Page Extended Data: Supported 00:24:17.769 Telemetry Log Pages: Not Supported 00:24:17.769 Persistent Event Log Pages: Not Supported 00:24:17.769 Supported Log Pages Log Page: May Support 00:24:17.769 Commands Supported & Effects Log Page: Not Supported 00:24:17.769 Feature Identifiers & Effects Log Page:May Support 00:24:17.769 NVMe-MI Commands & Effects Log Page: May Support 00:24:17.769 Data Area 4 for Telemetry Log: Not Supported 00:24:17.769 Error Log Page Entries Supported: 128 00:24:17.769 Keep Alive: Not Supported 00:24:17.769 00:24:17.769 NVM Command Set Attributes 00:24:17.769 ========================== 00:24:17.769 Submission Queue Entry Size 00:24:17.769 Max: 1 00:24:17.769 Min: 1 00:24:17.769 Completion Queue Entry Size 00:24:17.769 Max: 1 00:24:17.769 Min: 1 00:24:17.769 Number of Namespaces: 0 00:24:17.769 Compare Command: Not Supported 00:24:17.769 Write Uncorrectable Command: Not Supported 00:24:17.769 Dataset Management Command: Not Supported 00:24:17.769 Write Zeroes Command: Not Supported 00:24:17.769 Set Features Save Field: Not Supported 00:24:17.769 Reservations: Not Supported 00:24:17.769 Timestamp: Not Supported 00:24:17.769 Copy: Not Supported 00:24:17.769 Volatile Write Cache: Not Present 00:24:17.769 Atomic Write Unit (Normal): 1 00:24:17.769 Atomic Write Unit (PFail): 1 00:24:17.769 Atomic Compare & Write Unit: 1 00:24:17.769 Fused Compare & Write: Supported 00:24:17.769 Scatter-Gather List 00:24:17.769 SGL Command Set: Supported 00:24:17.769 SGL Keyed: Supported 00:24:17.769 SGL Bit Bucket Descriptor: Not Supported 00:24:17.769 SGL Metadata Pointer: Not Supported 00:24:17.769 Oversized SGL: Not Supported 00:24:17.769 SGL Metadata Address: Not Supported 00:24:17.769 SGL Offset: Supported 00:24:17.769 Transport SGL Data Block: Not Supported 00:24:17.769 Replay Protected Memory Block: Not Supported 00:24:17.769 00:24:17.769 Firmware Slot Information 00:24:17.769 ========================= 00:24:17.769 Active slot: 0 00:24:17.769 00:24:17.769 00:24:17.769 Error Log 00:24:17.769 ========= 00:24:17.769 00:24:17.769 Active Namespaces 00:24:17.769 ================= 00:24:17.769 Discovery Log Page 00:24:17.769 ================== 00:24:17.769 Generation Counter: 2 00:24:17.769 Number of Records: 2 00:24:17.769 Record Format: 0 00:24:17.769 00:24:17.769 Discovery Log Entry 0 00:24:17.769 ---------------------- 00:24:17.769 Transport Type: 3 (TCP) 00:24:17.769 Address Family: 1 (IPv4) 00:24:17.769 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:17.769 Entry Flags: 00:24:17.769 Duplicate Returned Information: 1 00:24:17.769 Explicit Persistent Connection Support for Discovery: 1 00:24:17.769 Transport Requirements: 00:24:17.769 Secure Channel: Not Required 00:24:17.769 Port ID: 0 (0x0000) 00:24:17.770 Controller ID: 65535 (0xffff) 00:24:17.770 Admin Max SQ Size: 128 00:24:17.770 Transport Service Identifier: 4420 00:24:17.770 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:17.770 Transport Address: 10.0.0.2 00:24:17.770 
Discovery Log Entry 1 00:24:17.770 ---------------------- 00:24:17.770 Transport Type: 3 (TCP) 00:24:17.770 Address Family: 1 (IPv4) 00:24:17.770 Subsystem Type: 2 (NVM Subsystem) 00:24:17.770 Entry Flags: 00:24:17.770 Duplicate Returned Information: 0 00:24:17.770 Explicit Persistent Connection Support for Discovery: 0 00:24:17.770 Transport Requirements: 00:24:17.770 Secure Channel: Not Required 00:24:17.770 Port ID: 0 (0x0000) 00:24:17.770 Controller ID: 65535 (0xffff) 00:24:17.770 Admin Max SQ Size: 128 00:24:17.770 Transport Service Identifier: 4420 00:24:17.770 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:17.770 Transport Address: 10.0.0.2 [2024-11-26 19:14:34.713305] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:24:17.770 [2024-11-26 19:14:34.713318] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa69100) on tqpair=0xa07690 00:24:17.770 [2024-11-26 19:14:34.713326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.770 [2024-11-26 19:14:34.713332] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa69280) on tqpair=0xa07690 00:24:17.770 [2024-11-26 19:14:34.713336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.770 [2024-11-26 19:14:34.713341] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa69400) on tqpair=0xa07690 00:24:17.770 [2024-11-26 19:14:34.713346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.770 [2024-11-26 19:14:34.713351] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa69580) on tqpair=0xa07690 00:24:17.770 [2024-11-26 19:14:34.713356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.770 [2024-11-26 19:14:34.713366] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.770 [2024-11-26 19:14:34.713370] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.770 [2024-11-26 19:14:34.713374] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa07690) 00:24:17.770 [2024-11-26 19:14:34.713382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.770 [2024-11-26 19:14:34.713398] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa69580, cid 3, qid 0 00:24:17.770 [2024-11-26 19:14:34.713619] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.770 [2024-11-26 19:14:34.713627] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.770 [2024-11-26 19:14:34.713631] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.770 [2024-11-26 19:14:34.713635] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa69580) on tqpair=0xa07690 00:24:17.770 [2024-11-26 19:14:34.713643] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.770 [2024-11-26 19:14:34.713647] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.770 [2024-11-26 19:14:34.713650] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa07690) 00:24:17.770 [2024-11-26 19:14:34.713662] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.770 [2024-11-26 19:14:34.713676] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa69580, cid 3, qid 0 00:24:17.770 [2024-11-26 19:14:34.713918] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.770 [2024-11-26 19:14:34.713925] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.770 [2024-11-26 19:14:34.713929] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.770 [2024-11-26 19:14:34.713933] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa69580) on tqpair=0xa07690 00:24:17.770 [2024-11-26 19:14:34.713939] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:24:17.770 [2024-11-26 19:14:34.713944] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:24:17.770 [2024-11-26 19:14:34.713955] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.770 [2024-11-26 19:14:34.713959] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.770 [2024-11-26 19:14:34.713962] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa07690) 00:24:17.770 [2024-11-26 19:14:34.713969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.770 [2024-11-26 19:14:34.713980] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa69580, cid 3, qid 0 00:24:17.770 [2024-11-26 19:14:34.714220] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.770 [2024-11-26 19:14:34.714227] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.770 [2024-11-26 19:14:34.714230] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.770 [2024-11-26 19:14:34.714234] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa69580) on tqpair=0xa07690 00:24:17.770 [2024-11-26 19:14:34.714245] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.770 [2024-11-26 19:14:34.714249] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.770 [2024-11-26 19:14:34.714253] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa07690) 00:24:17.770 [2024-11-26 19:14:34.714260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.770 [2024-11-26 19:14:34.714271] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa69580, cid 3, qid 0 00:24:17.770 [2024-11-26 19:14:34.714481] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.770 [2024-11-26 19:14:34.714488] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.770 [2024-11-26 19:14:34.714492] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.770 [2024-11-26 19:14:34.714496] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa69580) on tqpair=0xa07690 00:24:17.770 [2024-11-26 19:14:34.714505] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.770 [2024-11-26 19:14:34.714509] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.771 [2024-11-26 19:14:34.714513] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa07690) 00:24:17.771 [2024-11-26 19:14:34.714520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.771 [2024-11-26 19:14:34.714530] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa69580, cid 3, qid 0 00:24:17.771 [2024-11-26 19:14:34.714772] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.771 [2024-11-26 19:14:34.714781] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.771 [2024-11-26 19:14:34.714785] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.771 [2024-11-26 19:14:34.714791] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa69580) on tqpair=0xa07690 00:24:17.771 [2024-11-26 19:14:34.714803] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.771 [2024-11-26 19:14:34.714812] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.771 [2024-11-26 19:14:34.714816] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa07690) 00:24:17.771 [2024-11-26 19:14:34.714823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.771 [2024-11-26 19:14:34.714833] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa69580, cid 3, qid 0 00:24:17.771 [2024-11-26 19:14:34.715077] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.771 [2024-11-26 19:14:34.715083] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.771 [2024-11-26 19:14:34.715087] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.771 [2024-11-26 19:14:34.715091] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa69580) on tqpair=0xa07690 00:24:17.771 [2024-11-26 19:14:34.715101] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.771 [2024-11-26 19:14:34.715105] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.771 [2024-11-26 19:14:34.715109] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa07690) 00:24:17.771 [2024-11-26 19:14:34.715115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.771 [2024-11-26 19:14:34.715125] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa69580, cid 3, qid 0 00:24:17.771 [2024-11-26 19:14:34.715327] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.771 [2024-11-26 19:14:34.715337] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.771 [2024-11-26 19:14:34.715340] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.771 [2024-11-26 19:14:34.715344] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa69580) on tqpair=0xa07690 00:24:17.771 [2024-11-26 19:14:34.715354] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.771 [2024-11-26 19:14:34.715358] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.771 [2024-11-26 19:14:34.715362] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa07690) 00:24:17.771 [2024-11-26 19:14:34.715372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.771 [2024-11-26 19:14:34.715383] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa69580, cid 3, qid 0 00:24:17.771 [2024-11-26 19:14:34.715547] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.771 [2024-11-26 19:14:34.715555] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.771 [2024-11-26 19:14:34.715562] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.771 [2024-11-26 19:14:34.715565] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa69580) on tqpair=0xa07690 00:24:17.771 [2024-11-26 19:14:34.715575] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.771 [2024-11-26 19:14:34.715579] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.771 [2024-11-26 19:14:34.715583] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa07690) 00:24:17.771 [2024-11-26 19:14:34.715590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.771 [2024-11-26 19:14:34.715604] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa69580, cid 3, qid 0 00:24:17.771 [2024-11-26 19:14:34.715780] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.771 [2024-11-26 19:14:34.715788] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.771 [2024-11-26 19:14:34.715791] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.771 [2024-11-26 19:14:34.715795] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa69580) on tqpair=0xa07690 00:24:17.771 [2024-11-26 19:14:34.715805] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.771 [2024-11-26 19:14:34.715809] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.771 [2024-11-26 19:14:34.715815] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa07690) 00:24:17.771 [2024-11-26 19:14:34.715822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.771 [2024-11-26 19:14:34.715833] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa69580, cid 3, qid 0 00:24:17.771 [2024-11-26 19:14:34.716034] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.771 [2024-11-26 19:14:34.716041] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.771 [2024-11-26 19:14:34.716045] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.771 [2024-11-26 19:14:34.716049] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa69580) on tqpair=0xa07690 00:24:17.771 [2024-11-26 19:14:34.716058] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.771 [2024-11-26 19:14:34.716062] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.771 [2024-11-26 19:14:34.716066] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa07690) 00:24:17.771 [2024-11-26 19:14:34.716073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.771 [2024-11-26 19:14:34.716083] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa69580, cid 3, qid 0 00:24:17.771 [2024-11-26 19:14:34.716336] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.771 [2024-11-26 19:14:34.716343] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.771 [2024-11-26 19:14:34.716346] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.771 [2024-11-26 19:14:34.716350] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa69580) on tqpair=0xa07690 00:24:17.771 [2024-11-26 19:14:34.716361] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.771 [2024-11-26 19:14:34.716365] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.771 [2024-11-26 19:14:34.716368] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa07690) 00:24:17.771 [2024-11-26 19:14:34.716375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.771 [2024-11-26 19:14:34.716385] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa69580, cid 3, qid 0 00:24:17.771 [2024-11-26 19:14:34.716570] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.771 [2024-11-26 19:14:34.716576] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.771 [2024-11-26 19:14:34.716580] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.771 [2024-11-26 19:14:34.716584] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa69580) on tqpair=0xa07690 00:24:17.772 [2024-11-26 19:14:34.716594] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.772 [2024-11-26 19:14:34.716598] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.772 [2024-11-26 19:14:34.716602] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa07690) 00:24:17.772 [2024-11-26 19:14:34.716609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.772 [2024-11-26 19:14:34.716619] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa69580, cid 3, qid 0 00:24:17.772 [2024-11-26 19:14:34.716838] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.772 [2024-11-26 19:14:34.716844] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.772 [2024-11-26 19:14:34.716847] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.772 [2024-11-26 19:14:34.716851] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa69580) on tqpair=0xa07690 00:24:17.772 [2024-11-26 19:14:34.716862] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.772 [2024-11-26 19:14:34.716865] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.772 [2024-11-26 19:14:34.716869] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa07690) 00:24:17.772 [2024-11-26 19:14:34.716878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.772 [2024-11-26 19:14:34.716889] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa69580, cid 3, qid 0 00:24:17.772 [2024-11-26 19:14:34.717141] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.772 [2024-11-26 19:14:34.717150] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.772 [2024-11-26 19:14:34.717153] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.772 [2024-11-26 19:14:34.717157] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa69580) on tqpair=0xa07690 00:24:17.772 [2024-11-26 19:14:34.721178] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.772 [2024-11-26 19:14:34.721182] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.772 [2024-11-26 19:14:34.721186] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa07690) 00:24:17.772 [2024-11-26 19:14:34.721193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.772 [2024-11-26 19:14:34.721204] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa69580, cid 3, qid 0 00:24:17.772 [2024-11-26 19:14:34.721436] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.772 [2024-11-26 19:14:34.721444] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.772 [2024-11-26 19:14:34.721447] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.772 [2024-11-26 19:14:34.721451] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa69580) on tqpair=0xa07690 00:24:17.772 [2024-11-26 19:14:34.721463] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:24:17.772 00:24:17.772 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:17.772 [2024-11-26 19:14:34.770914] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
00:24:17.772 [2024-11-26 19:14:34.771001] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3035860 ] 00:24:17.772 [2024-11-26 19:14:34.828379] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:24:17.772 [2024-11-26 19:14:34.828439] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:17.772 [2024-11-26 19:14:34.828444] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:17.772 [2024-11-26 19:14:34.828465] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:17.772 [2024-11-26 19:14:34.828474] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:17.772 [2024-11-26 19:14:34.832453] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:24:17.772 [2024-11-26 19:14:34.832499] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x724690 0 00:24:17.772 [2024-11-26 19:14:34.832659] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:17.772 [2024-11-26 19:14:34.832667] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:17.772 [2024-11-26 19:14:34.832672] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:17.772 [2024-11-26 19:14:34.832675] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:17.772 [2024-11-26 19:14:34.832706] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.772 [2024-11-26 19:14:34.832716] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.772 [2024-11-26 19:14:34.832720] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x724690) 00:24:17.772 [2024-11-26 19:14:34.832733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:17.772 [2024-11-26 19:14:34.832749] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786100, cid 0, qid 0 00:24:17.772 [2024-11-26 19:14:34.840171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.772 [2024-11-26 19:14:34.840180] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.772 [2024-11-26 19:14:34.840184] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.772 [2024-11-26 19:14:34.840189] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786100) on tqpair=0x724690 00:24:17.772 [2024-11-26 19:14:34.840201] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:17.772 [2024-11-26 19:14:34.840208] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:24:17.772 [2024-11-26 19:14:34.840214] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:24:17.772 [2024-11-26 19:14:34.840229] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.772 [2024-11-26 19:14:34.840233] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.772 [2024-11-26 19:14:34.840237] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x724690) 00:24:17.772 [2024-11-26 19:14:34.840245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.772 [2024-11-26 19:14:34.840259] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786100, cid 0, qid 0 00:24:17.772 [2024-11-26 19:14:34.840336] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.772 [2024-11-26 19:14:34.840343] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.772 [2024-11-26 19:14:34.840346] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.772 [2024-11-26 19:14:34.840350] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786100) on tqpair=0x724690 00:24:17.772 [2024-11-26 19:14:34.840357] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:24:17.772 [2024-11-26 19:14:34.840365] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:24:17.772 [2024-11-26 19:14:34.840372] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.772 [2024-11-26 19:14:34.840376] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.773 [2024-11-26 19:14:34.840380] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x724690) 00:24:17.773 [2024-11-26 19:14:34.840386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.773 [2024-11-26 19:14:34.840397] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786100, cid 0, qid 0 00:24:17.773 [2024-11-26 19:14:34.840509] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.773 [2024-11-26 19:14:34.840515] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.773 [2024-11-26 19:14:34.840519] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.773 [2024-11-26 19:14:34.840523] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786100) on tqpair=0x724690 00:24:17.773 [2024-11-26 19:14:34.840528] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:24:17.773 [2024-11-26 19:14:34.840538] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:17.773 [2024-11-26 19:14:34.840544] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.773 [2024-11-26 19:14:34.840548] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.773 [2024-11-26 19:14:34.840555] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x724690) 00:24:17.773 [2024-11-26 19:14:34.840562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.773 [2024-11-26 19:14:34.840573] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786100, cid 0, qid 0 00:24:17.773 [2024-11-26 19:14:34.840657] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.773 [2024-11-26 19:14:34.840663] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.773 [2024-11-26 19:14:34.840667] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.773 [2024-11-26 19:14:34.840671] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786100) on tqpair=0x724690 00:24:17.773 [2024-11-26 19:14:34.840676] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:17.773 [2024-11-26 19:14:34.840685] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.773 [2024-11-26 19:14:34.840689] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.773 [2024-11-26 19:14:34.840693] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x724690) 00:24:17.773 [2024-11-26 19:14:34.840700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.773 [2024-11-26 19:14:34.840710] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786100, cid 0, qid 0 00:24:17.773 [2024-11-26 19:14:34.840808] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.773 [2024-11-26 19:14:34.840815] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.773 [2024-11-26 19:14:34.840818] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.773 [2024-11-26 19:14:34.840822] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786100) on tqpair=0x724690 00:24:17.773 [2024-11-26 19:14:34.840827] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:17.773 [2024-11-26 19:14:34.840832] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:17.773 [2024-11-26 19:14:34.840840] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:17.773 [2024-11-26 19:14:34.840948] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:24:17.773 [2024-11-26 19:14:34.840953] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:17.773 [2024-11-26 19:14:34.840961] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.773 [2024-11-26 19:14:34.840965] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.773 [2024-11-26 19:14:34.840969] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x724690) 00:24:17.773 [2024-11-26 19:14:34.840975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.773 [2024-11-26 19:14:34.840986] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786100, cid 0, qid 0 00:24:17.773 [2024-11-26 19:14:34.841053] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.773 [2024-11-26 19:14:34.841059] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.773 [2024-11-26 19:14:34.841063] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.773 [2024-11-26 19:14:34.841067] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786100) on tqpair=0x724690 00:24:17.773 [2024-11-26 
19:14:34.841071] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:17.773 [2024-11-26 19:14:34.841081] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.773 [2024-11-26 19:14:34.841087] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.773 [2024-11-26 19:14:34.841091] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x724690) 00:24:17.773 [2024-11-26 19:14:34.841098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.773 [2024-11-26 19:14:34.841109] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786100, cid 0, qid 0 00:24:17.773 [2024-11-26 19:14:34.841203] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.773 [2024-11-26 19:14:34.841210] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.773 [2024-11-26 19:14:34.841214] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.773 [2024-11-26 19:14:34.841218] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786100) on tqpair=0x724690 00:24:17.773 [2024-11-26 19:14:34.841222] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:17.773 [2024-11-26 19:14:34.841227] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:17.773 [2024-11-26 19:14:34.841235] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:24:17.773 [2024-11-26 19:14:34.841246] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:17.773 [2024-11-26 19:14:34.841254] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.773 [2024-11-26 19:14:34.841258] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x724690) 00:24:17.773 [2024-11-26 19:14:34.841265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.773 [2024-11-26 19:14:34.841276] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786100, cid 0, qid 0 00:24:17.773 [2024-11-26 19:14:34.841402] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.773 [2024-11-26 19:14:34.841408] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.773 [2024-11-26 19:14:34.841412] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.773 [2024-11-26 19:14:34.841416] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x724690): datao=0, datal=4096, cccid=0 00:24:17.773 [2024-11-26 19:14:34.841421] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x786100) on tqpair(0x724690): expected_datao=0, payload_size=4096 00:24:17.773 [2024-11-26 19:14:34.841426] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.773 [2024-11-26 19:14:34.841434] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.773 [2024-11-26 19:14:34.841438] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
00:24:17.773 [2024-11-26 19:14:34.841556] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.773 [2024-11-26 19:14:34.841562] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.773 [2024-11-26 19:14:34.841566] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.773 [2024-11-26 19:14:34.841570] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786100) on tqpair=0x724690 00:24:17.773 [2024-11-26 19:14:34.841578] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:24:17.773 [2024-11-26 19:14:34.841583] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:24:17.773 [2024-11-26 19:14:34.841587] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:24:17.773 [2024-11-26 19:14:34.841592] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:24:17.774 [2024-11-26 19:14:34.841596] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:24:17.774 [2024-11-26 19:14:34.841604] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:24:17.774 [2024-11-26 19:14:34.841613] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:17.774 [2024-11-26 19:14:34.841620] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.774 [2024-11-26 19:14:34.841624] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.774 [2024-11-26 19:14:34.841627] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x724690) 00:24:17.774 [2024-11-26 19:14:34.841634] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:17.774 [2024-11-26 19:14:34.841645] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786100, cid 0, qid 0 00:24:17.774 [2024-11-26 19:14:34.841758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.774 [2024-11-26 19:14:34.841764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.774 [2024-11-26 19:14:34.841767] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.774 [2024-11-26 19:14:34.841771] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786100) on tqpair=0x724690 00:24:17.774 [2024-11-26 19:14:34.841778] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.774 [2024-11-26 19:14:34.841782] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.774 [2024-11-26 19:14:34.841786] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x724690) 00:24:17.774 [2024-11-26 19:14:34.841792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.774 [2024-11-26 19:14:34.841798] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.774 [2024-11-26 19:14:34.841802] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.774 [2024-11-26 19:14:34.841806] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=1 on tqpair(0x724690) 00:24:17.774 [2024-11-26 19:14:34.841812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.774 [2024-11-26 19:14:34.841818] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.774 [2024-11-26 19:14:34.841822] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.774 [2024-11-26 19:14:34.841825] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x724690) 00:24:17.774 [2024-11-26 19:14:34.841831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.774 [2024-11-26 19:14:34.841837] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.774 [2024-11-26 19:14:34.841841] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.774 [2024-11-26 19:14:34.841845] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x724690) 00:24:17.774 [2024-11-26 19:14:34.841851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.774 [2024-11-26 19:14:34.841855] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:17.774 [2024-11-26 19:14:34.841866] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:17.774 [2024-11-26 19:14:34.841873] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.774 [2024-11-26 19:14:34.841877] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x724690) 00:24:17.774 [2024-11-26 19:14:34.841884] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.774 [2024-11-26 19:14:34.841896] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786100, cid 0, qid 0 00:24:17.774 [2024-11-26 19:14:34.841903] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786280, cid 1, qid 0 00:24:17.774 [2024-11-26 19:14:34.841908] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786400, cid 2, qid 0 00:24:17.774 [2024-11-26 19:14:34.841913] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786580, cid 3, qid 0 00:24:17.774 [2024-11-26 19:14:34.841918] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786700, cid 4, qid 0 00:24:17.774 [2024-11-26 19:14:34.842060] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.774 [2024-11-26 19:14:34.842066] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.774 [2024-11-26 19:14:34.842070] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.774 [2024-11-26 19:14:34.842073] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786700) on tqpair=0x724690 00:24:17.774 [2024-11-26 19:14:34.842078] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:24:17.774 [2024-11-26 19:14:34.842083] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 
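The records above are the tail of controller initialization: Identify has completed (transport max_xfer_size capped by MDTS to 131072, CNTLID 0x0001, 16 SGEs, fused compare-and-write supported), the driver arms its four ASYNC EVENT REQUESTs (cid 0-3), and then programs the keep-alive timeout. For reference, a minimal host-side sketch in C that drives the same handshake through SPDK's public NVMe API is below. It is illustrative only, not the code this test executes; it assumes a DEBUG build of SPDK with the nvme log flag enabled (otherwise the *DEBUG* records are not printed) and uses the target address shown in this log.

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/log.h"
    #include "spdk/nvme.h"

    /* Completions of the four armed ASYNC EVENT REQUESTs arrive here. */
    static void
    aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        (void)arg;
        printf("AER fired: cdw0=0x%08x\n", cpl->cdw0);
    }

    int
    main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        if (spdk_env_init(&env_opts) != 0) {
            return 1;
        }
        spdk_log_set_flag("nvme");  /* *DEBUG* records only appear on DEBUG builds */

        /* Target address taken from this log; adjust for your setup. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* Runs the init state machine traced here: identify, configure AER,
         * keep alive timeout, number of queues, identify ns, ... */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("CNTLID 0x%04x, fused compare-and-write %u, max xfer %u bytes\n",
               cdata->cntlid, cdata->fuses.compare_and_write,
               spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));

        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
        spdk_nvme_ctrlr_process_admin_completions(ctrlr);

        spdk_nvme_detach(ctrlr);
        return 0;
    }

spdk_nvme_connect() performs the whole sequence synchronously, so by the time it returns the state machine traced here has reached "ready (no timeout)" and the keep-alive timer is running.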
00:24:17.774 [2024-11-26 19:14:34.842095] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:24:17.774 [2024-11-26 19:14:34.842102] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:17.774 [2024-11-26 19:14:34.842108] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.774 [2024-11-26 19:14:34.842112] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.774 [2024-11-26 19:14:34.842116] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x724690) 00:24:17.774 [2024-11-26 19:14:34.842122] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:17.774 [2024-11-26 19:14:34.842133] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786700, cid 4, qid 0 00:24:17.774 [2024-11-26 19:14:34.842208] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.774 [2024-11-26 19:14:34.842215] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.774 [2024-11-26 19:14:34.842219] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.774 [2024-11-26 19:14:34.842222] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786700) on tqpair=0x724690 00:24:17.774 [2024-11-26 19:14:34.842288] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:24:17.774 [2024-11-26 19:14:34.842299] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:17.774 [2024-11-26 19:14:34.842307] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.774 [2024-11-26 19:14:34.842311] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x724690) 00:24:17.774 [2024-11-26 19:14:34.842317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.774 [2024-11-26 19:14:34.842329] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786700, cid 4, qid 0 00:24:17.774 [2024-11-26 19:14:34.842418] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.774 [2024-11-26 19:14:34.842424] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.774 [2024-11-26 19:14:34.842428] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.774 [2024-11-26 19:14:34.842431] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x724690): datao=0, datal=4096, cccid=4 00:24:17.774 [2024-11-26 19:14:34.842436] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x786700) on tqpair(0x724690): expected_datao=0, payload_size=4096 00:24:17.774 [2024-11-26 19:14:34.842440] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.774 [2024-11-26 19:14:34.842450] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.774 [2024-11-26 19:14:34.842453] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.774 [2024-11-26 19:14:34.842523] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.774 [2024-11-26 19:14:34.842529] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.774 [2024-11-26 19:14:34.842532] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.774 [2024-11-26 19:14:34.842536] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786700) on tqpair=0x724690 00:24:17.774 [2024-11-26 19:14:34.842549] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:24:17.774 [2024-11-26 19:14:34.842568] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:24:17.774 [2024-11-26 19:14:34.842578] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:24:17.774 [2024-11-26 19:14:34.842585] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.774 [2024-11-26 19:14:34.842589] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x724690) 00:24:17.774 [2024-11-26 19:14:34.842595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.774 [2024-11-26 19:14:34.842607] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786700, cid 4, qid 0 00:24:17.775 [2024-11-26 19:14:34.842720] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.775 [2024-11-26 19:14:34.842726] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.775 [2024-11-26 19:14:34.842730] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.775 [2024-11-26 19:14:34.842734] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x724690): datao=0, datal=4096, cccid=4 00:24:17.775 [2024-11-26 19:14:34.842738] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x786700) on tqpair(0x724690): expected_datao=0, payload_size=4096 00:24:17.775 [2024-11-26 19:14:34.842742] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.775 [2024-11-26 19:14:34.842749] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.775 [2024-11-26 19:14:34.842753] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.775 [2024-11-26 19:14:34.842815] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.775 [2024-11-26 19:14:34.842822] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.775 [2024-11-26 19:14:34.842825] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.775 [2024-11-26 19:14:34.842829] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786700) on tqpair=0x724690 00:24:17.775 [2024-11-26 19:14:34.842841] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:17.775 [2024-11-26 19:14:34.842851] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:17.775 [2024-11-26 19:14:34.842858] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.775 [2024-11-26 19:14:34.842862] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x724690) 00:24:17.775 [2024-11-26 19:14:34.842869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.775 [2024-11-26 19:14:34.842880] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786700, cid 4, qid 0 00:24:17.775 [2024-11-26 19:14:34.842972] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.775 [2024-11-26 19:14:34.842979] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.775 [2024-11-26 19:14:34.842982] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.775 [2024-11-26 19:14:34.842988] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x724690): datao=0, datal=4096, cccid=4 00:24:17.775 [2024-11-26 19:14:34.842993] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x786700) on tqpair(0x724690): expected_datao=0, payload_size=4096 00:24:17.775 [2024-11-26 19:14:34.842997] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.775 [2024-11-26 19:14:34.843004] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.775 [2024-11-26 19:14:34.843008] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.775 [2024-11-26 19:14:34.843089] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.775 [2024-11-26 19:14:34.843095] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.775 [2024-11-26 19:14:34.843098] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.775 [2024-11-26 19:14:34.843102] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786700) on tqpair=0x724690 00:24:17.775 [2024-11-26 19:14:34.843113] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:17.775 [2024-11-26 19:14:34.843122] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:24:17.775 [2024-11-26 19:14:34.843130] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:24:17.775 [2024-11-26 19:14:34.843137] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:17.775 [2024-11-26 19:14:34.843142] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:17.775 [2024-11-26 19:14:34.843148] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:24:17.775 [2024-11-26 19:14:34.843153] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:24:17.775 [2024-11-26 19:14:34.843163] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:24:17.775 [2024-11-26 19:14:34.843169] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:24:17.775 [2024-11-26 19:14:34.843186] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.775 [2024-11-26 19:14:34.843190] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x724690) 00:24:17.775 
[2024-11-26 19:14:34.843197] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.775 [2024-11-26 19:14:34.843204] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.775 [2024-11-26 19:14:34.843208] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.775 [2024-11-26 19:14:34.843211] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x724690) 00:24:17.775 [2024-11-26 19:14:34.843218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.775 [2024-11-26 19:14:34.843232] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786700, cid 4, qid 0 00:24:17.775 [2024-11-26 19:14:34.843237] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786880, cid 5, qid 0 00:24:17.775 [2024-11-26 19:14:34.843356] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.775 [2024-11-26 19:14:34.843362] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.775 [2024-11-26 19:14:34.843366] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.775 [2024-11-26 19:14:34.843370] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786700) on tqpair=0x724690 00:24:17.775 [2024-11-26 19:14:34.843376] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.775 [2024-11-26 19:14:34.843382] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.775 [2024-11-26 19:14:34.843388] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.775 [2024-11-26 19:14:34.843392] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786880) on tqpair=0x724690 00:24:17.775 [2024-11-26 19:14:34.843401] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.775 [2024-11-26 19:14:34.843405] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x724690) 00:24:17.775 [2024-11-26 19:14:34.843412] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.775 [2024-11-26 19:14:34.843422] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786880, cid 5, qid 0 00:24:17.775 [2024-11-26 19:14:34.843505] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.775 [2024-11-26 19:14:34.843512] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.775 [2024-11-26 19:14:34.843515] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.775 [2024-11-26 19:14:34.843519] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786880) on tqpair=0x724690 00:24:17.775 [2024-11-26 19:14:34.843528] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.775 [2024-11-26 19:14:34.843532] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x724690) 00:24:17.775 [2024-11-26 19:14:34.843539] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.775 [2024-11-26 19:14:34.843549] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786880, cid 5, qid 0 00:24:17.775 [2024-11-26 19:14:34.843657] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:24:17.775 [2024-11-26 19:14:34.843663] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.775 [2024-11-26 19:14:34.843667] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.775 [2024-11-26 19:14:34.843671] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786880) on tqpair=0x724690 00:24:17.775 [2024-11-26 19:14:34.843680] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.775 [2024-11-26 19:14:34.843684] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x724690) 00:24:17.775 [2024-11-26 19:14:34.843691] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.775 [2024-11-26 19:14:34.843701] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786880, cid 5, qid 0 00:24:17.775 [2024-11-26 19:14:34.843795] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.775 [2024-11-26 19:14:34.843801] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.775 [2024-11-26 19:14:34.843805] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.775 [2024-11-26 19:14:34.843809] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786880) on tqpair=0x724690 00:24:17.776 [2024-11-26 19:14:34.843824] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.776 [2024-11-26 19:14:34.843829] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x724690) 00:24:17.776 [2024-11-26 19:14:34.843835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.776 [2024-11-26 19:14:34.843843] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.776 [2024-11-26 19:14:34.843846] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x724690) 00:24:17.776 [2024-11-26 19:14:34.843853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.776 [2024-11-26 19:14:34.843860] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.776 [2024-11-26 19:14:34.843864] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x724690) 00:24:17.776 [2024-11-26 19:14:34.843874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.776 [2024-11-26 19:14:34.843882] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.776 [2024-11-26 19:14:34.843886] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x724690) 00:24:17.776 [2024-11-26 19:14:34.843892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.776 [2024-11-26 19:14:34.843904] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786880, cid 5, qid 0 00:24:17.776 [2024-11-26 19:14:34.843909] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786700, cid 4, qid 0 00:24:17.776 [2024-11-26 19:14:34.843914] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786a00, cid 6, qid 0 00:24:17.776 [2024-11-26 19:14:34.843918] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786b80, cid 7, qid 0 00:24:17.776 [2024-11-26 19:14:34.844085] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.776 [2024-11-26 19:14:34.844092] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.776 [2024-11-26 19:14:34.844095] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.776 [2024-11-26 19:14:34.844099] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x724690): datao=0, datal=8192, cccid=5 00:24:17.776 [2024-11-26 19:14:34.844104] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x786880) on tqpair(0x724690): expected_datao=0, payload_size=8192 00:24:17.776 [2024-11-26 19:14:34.844108] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.776 [2024-11-26 19:14:34.848168] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.776 [2024-11-26 19:14:34.848174] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.776 [2024-11-26 19:14:34.848180] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.776 [2024-11-26 19:14:34.848186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.776 [2024-11-26 19:14:34.848189] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.776 [2024-11-26 19:14:34.848193] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x724690): datao=0, datal=512, cccid=4 00:24:17.776 [2024-11-26 19:14:34.848197] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x786700) on tqpair(0x724690): expected_datao=0, payload_size=512 00:24:17.776 [2024-11-26 19:14:34.848202] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.776 [2024-11-26 19:14:34.848208] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.776 [2024-11-26 19:14:34.848212] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.776 [2024-11-26 19:14:34.848217] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.776 [2024-11-26 19:14:34.848223] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.776 [2024-11-26 19:14:34.848226] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.776 [2024-11-26 19:14:34.848230] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x724690): datao=0, datal=512, cccid=6 00:24:17.776 [2024-11-26 19:14:34.848234] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x786a00) on tqpair(0x724690): expected_datao=0, payload_size=512 00:24:17.776 [2024-11-26 19:14:34.848239] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.776 [2024-11-26 19:14:34.848245] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:17.776 [2024-11-26 19:14:34.848249] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:17.776 [2024-11-26 19:14:34.848254] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:17.776 [2024-11-26 19:14:34.848260] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:17.776 [2024-11-26 19:14:34.848263] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:17.776 [2024-11-26 19:14:34.848267] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x724690): datao=0, datal=4096, cccid=7
00:24:17.776 [2024-11-26 19:14:34.848274] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x786b80) on tqpair(0x724690): expected_datao=0, payload_size=4096
00:24:17.776 [2024-11-26 19:14:34.848278] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:17.776 [2024-11-26 19:14:34.848285] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:17.776 [2024-11-26 19:14:34.848288] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:17.776 [2024-11-26 19:14:34.848294] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:17.776 [2024-11-26 19:14:34.848300] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:17.776 [2024-11-26 19:14:34.848303] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:17.776 [2024-11-26 19:14:34.848307] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786880) on tqpair=0x724690
00:24:17.776 [2024-11-26 19:14:34.848321] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:17.776 [2024-11-26 19:14:34.848327] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:17.776 [2024-11-26 19:14:34.848330] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:17.776 [2024-11-26 19:14:34.848334] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786700) on tqpair=0x724690
00:24:17.776 [2024-11-26 19:14:34.848345] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:17.776 [2024-11-26 19:14:34.848351] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:17.776 [2024-11-26 19:14:34.848355] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:17.776 [2024-11-26 19:14:34.848358] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786a00) on tqpair=0x724690
00:24:17.776 [2024-11-26 19:14:34.848366] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:17.776 [2024-11-26 19:14:34.848371] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:17.776 [2024-11-26 19:14:34.848375] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:17.776 [2024-11-26 19:14:34.848379] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786b80) on tqpair=0x724690
=====================================================
00:24:17.776 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:17.776 =====================================================
00:24:17.776 Controller Capabilities/Features
00:24:17.776 ================================
00:24:17.776 Vendor ID: 8086
00:24:17.776 Subsystem Vendor ID: 8086
00:24:17.776 Serial Number: SPDK00000000000001
00:24:17.776 Model Number: SPDK bdev Controller
00:24:17.776 Firmware Version: 25.01
00:24:17.776 Recommended Arb Burst: 6
00:24:17.776 IEEE OUI Identifier: e4 d2 5c
00:24:17.776 Multi-path I/O
00:24:17.776 May have multiple subsystem ports: Yes
00:24:17.776 May have multiple controllers: Yes
00:24:17.776 Associated with SR-IOV VF: No
00:24:17.776 Max Data Transfer Size: 131072
00:24:17.776 Max Number of Namespaces: 32
00:24:17.776 Max Number of I/O Queues: 127
00:24:17.776 NVMe Specification Version (VS): 1.3
00:24:17.776 NVMe Specification Version (Identify): 1.3
00:24:17.776 Maximum Queue Entries: 128
00:24:17.776 Contiguous Queues Required: Yes
00:24:17.776 Arbitration Mechanisms Supported
00:24:17.776 Weighted Round Robin: Not Supported
00:24:17.776 Vendor Specific: Not Supported
00:24:17.776 Reset Timeout: 15000 ms
00:24:17.776 Doorbell Stride: 4 bytes
00:24:17.776 NVM Subsystem Reset: Not Supported
00:24:17.776 Command Sets Supported
00:24:17.776 NVM Command Set: Supported
00:24:17.776 Boot Partition: Not Supported
00:24:17.776 Memory Page Size Minimum: 4096 bytes
00:24:17.776 Memory Page Size Maximum: 4096 bytes
00:24:17.776 Persistent Memory Region: Not Supported
00:24:17.776 Optional Asynchronous Events Supported
00:24:17.776 Namespace Attribute Notices: Supported
00:24:17.776 Firmware Activation Notices: Not Supported
00:24:17.776 ANA Change Notices: Not Supported
00:24:17.776 PLE Aggregate Log Change Notices: Not Supported
00:24:17.776 LBA Status Info Alert Notices: Not Supported
00:24:17.776 EGE Aggregate Log Change Notices: Not Supported
00:24:17.776 Normal NVM Subsystem Shutdown event: Not Supported
00:24:17.776 Zone Descriptor Change Notices: Not Supported
00:24:17.776 Discovery Log Change Notices: Not Supported
00:24:17.776 Controller Attributes
00:24:17.776 128-bit Host Identifier: Supported
00:24:17.776 Non-Operational Permissive Mode: Not Supported
00:24:17.776 NVM Sets: Not Supported
00:24:17.776 Read Recovery Levels: Not Supported
00:24:17.776 Endurance Groups: Not Supported
00:24:17.776 Predictable Latency Mode: Not Supported
00:24:17.776 Traffic Based Keep ALive: Not Supported
00:24:17.776 Namespace Granularity: Not Supported
00:24:17.776 SQ Associations: Not Supported
00:24:17.776 UUID List: Not Supported
00:24:17.776 Multi-Domain Subsystem: Not Supported
00:24:17.776 Fixed Capacity Management: Not Supported
00:24:17.776 Variable Capacity Management: Not Supported
00:24:17.776 Delete Endurance Group: Not Supported
00:24:17.777 Delete NVM Set: Not Supported
00:24:17.777 Extended LBA Formats Supported: Not Supported
00:24:17.777 Flexible Data Placement Supported: Not Supported
00:24:17.777
00:24:17.777 Controller Memory Buffer Support
00:24:17.777 ================================
00:24:17.777 Supported: No
00:24:17.777
00:24:17.777 Persistent Memory Region Support
00:24:17.777 ================================
00:24:17.777 Supported: No
00:24:17.777
00:24:17.777 Admin Command Set Attributes
00:24:17.777 ============================
00:24:17.777 Security Send/Receive: Not Supported
00:24:17.777 Format NVM: Not Supported
00:24:17.777 Firmware Activate/Download: Not Supported
00:24:17.777 Namespace Management: Not Supported
00:24:17.777 Device Self-Test: Not Supported
00:24:17.777 Directives: Not Supported
00:24:17.777 NVMe-MI: Not Supported
00:24:17.777 Virtualization Management: Not Supported
00:24:17.777 Doorbell Buffer Config: Not Supported
00:24:17.777 Get LBA Status Capability: Not Supported
00:24:17.777 Command & Feature Lockdown Capability: Not Supported
00:24:17.777 Abort Command Limit: 4
00:24:17.777 Async Event Request Limit: 4
00:24:17.777 Number of Firmware Slots: N/A
00:24:17.777 Firmware Slot 1 Read-Only: N/A
00:24:17.777 Firmware Activation Without Reset: N/A
00:24:17.777 Multiple Update Detection Support: N/A
00:24:17.777 Firmware Update Granularity: No Information Provided
00:24:17.777 Per-Namespace SMART Log: No
00:24:17.777 Asymmetric Namespace Access Log Page: Not Supported
00:24:17.777 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:24:17.777 Command Effects Log Page: Supported
00:24:17.777 Get Log Page Extended Data: Supported
00:24:17.777 Telemetry Log Pages: Not Supported
00:24:17.777 Persistent Event Log Pages: Not Supported
00:24:17.777 Supported Log Pages Log Page: May Support
00:24:17.777 Commands Supported & Effects Log Page: Not Supported
00:24:17.777 Feature Identifiers & Effects Log Page:May Support
00:24:17.777 NVMe-MI Commands & Effects Log Page: May Support
00:24:17.777 Data Area 4 for Telemetry Log: Not Supported
00:24:17.777 Error Log Page Entries Supported: 128
00:24:17.777 Keep Alive: Supported
00:24:17.777 Keep Alive Granularity: 10000 ms
00:24:17.777
00:24:17.777 NVM Command Set Attributes
00:24:17.777 ==========================
00:24:17.777 Submission Queue Entry Size
00:24:17.777 Max: 64
00:24:17.777 Min: 64
00:24:17.777 Completion Queue Entry Size
00:24:17.777 Max: 16
00:24:17.777 Min: 16
00:24:17.777 Number of Namespaces: 32
00:24:17.777 Compare Command: Supported
00:24:17.777 Write Uncorrectable Command: Not Supported
00:24:17.777 Dataset Management Command: Supported
00:24:17.777 Write Zeroes Command: Supported
00:24:17.777 Set Features Save Field: Not Supported
00:24:17.777 Reservations: Supported
00:24:17.777 Timestamp: Not Supported
00:24:17.777 Copy: Supported
00:24:17.777 Volatile Write Cache: Present
00:24:17.777 Atomic Write Unit (Normal): 1
00:24:17.777 Atomic Write Unit (PFail): 1
00:24:17.777 Atomic Compare & Write Unit: 1
00:24:17.777 Fused Compare & Write: Supported
00:24:17.777 Scatter-Gather List
00:24:17.777 SGL Command Set: Supported
00:24:17.777 SGL Keyed: Supported
00:24:17.777 SGL Bit Bucket Descriptor: Not Supported
00:24:17.777 SGL Metadata Pointer: Not Supported
00:24:17.777 Oversized SGL: Not Supported
00:24:17.777 SGL Metadata Address: Not Supported
00:24:17.777 SGL Offset: Supported
00:24:17.777 Transport SGL Data Block: Not Supported
00:24:17.777 Replay Protected Memory Block: Not Supported
00:24:17.777
00:24:17.777 Firmware Slot Information
00:24:17.777 =========================
00:24:17.777 Active slot: 1
00:24:17.777 Slot 1 Firmware Revision: 25.01
00:24:17.777
00:24:17.777
00:24:17.777 Commands Supported and Effects
00:24:17.777 ==============================
00:24:17.777 Admin Commands
00:24:17.777 --------------
00:24:17.777 Get Log Page (02h): Supported
00:24:17.777 Identify (06h): Supported
00:24:17.777 Abort (08h): Supported
00:24:17.777 Set Features (09h): Supported
00:24:17.777 Get Features (0Ah): Supported
00:24:17.777 Asynchronous Event Request (0Ch): Supported
00:24:17.777 Keep Alive (18h): Supported
00:24:17.777 I/O Commands
00:24:17.777 ------------
00:24:17.777 Flush (00h): Supported LBA-Change
00:24:17.777 Write (01h): Supported LBA-Change
00:24:17.777 Read (02h): Supported
00:24:17.777 Compare (05h): Supported
00:24:17.777 Write Zeroes (08h): Supported LBA-Change
00:24:17.777 Dataset Management (09h): Supported LBA-Change
00:24:17.777 Copy (19h): Supported LBA-Change
00:24:17.777
00:24:17.777 Error Log
00:24:17.777 =========
00:24:17.777
00:24:17.777 Arbitration
00:24:17.777 ===========
00:24:17.777 Arbitration Burst: 1
00:24:17.777
00:24:17.777 Power Management
00:24:17.777 ================
00:24:17.777 Number of Power States: 1
00:24:17.777 Current Power State: Power State #0
00:24:17.777 Power State #0:
00:24:17.777 Max Power: 0.00 W
00:24:17.777 Non-Operational State: Operational
00:24:17.777 Entry Latency: Not Reported
00:24:17.777 Exit Latency: Not Reported
00:24:17.777 Relative Read Throughput: 0
00:24:17.777 Relative Read Latency: 0
00:24:17.777 Relative Write Throughput: 0
00:24:17.777 Relative Write Latency: 0
00:24:17.777 Idle Power: Not Reported
00:24:17.777 Active Power: Not Reported
00:24:17.777 Non-Operational Permissive Mode: Not Supported
00:24:17.777
00:24:17.777 Health Information
00:24:17.777 ==================
00:24:17.777 Critical Warnings:
00:24:17.777 Available Spare Space: OK
00:24:17.777 Temperature: OK
00:24:17.777 Device Reliability: OK
00:24:17.777 Read Only: No
00:24:17.777 Volatile Memory Backup: OK
00:24:17.777 Current Temperature: 0 Kelvin (-273 Celsius)
00:24:17.777 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:24:17.777 Available Spare: 0%
00:24:17.777 Available Spare Threshold: 0%
00:24:17.777 Life Percentage Used:[2024-11-26 19:14:34.848479] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:17.777 [2024-11-26 19:14:34.848485] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x724690)
00:24:17.777 [2024-11-26 19:14:34.848492] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.777 [2024-11-26 19:14:34.848506] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786b80, cid 7, qid 0
00:24:17.778 [2024-11-26 19:14:34.848589] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:17.778 [2024-11-26 19:14:34.848595] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:17.778 [2024-11-26 19:14:34.848599] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:17.778 [2024-11-26 19:14:34.848603] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786b80) on tqpair=0x724690
00:24:17.778 [2024-11-26 19:14:34.848639] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD
00:24:17.778 [2024-11-26 19:14:34.848649] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786100) on tqpair=0x724690
00:24:17.778 [2024-11-26 19:14:34.848656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:17.778 [2024-11-26 19:14:34.848661] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786280) on tqpair=0x724690
00:24:17.778 [2024-11-26 19:14:34.848666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:17.778 [2024-11-26 19:14:34.848671] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786400) on tqpair=0x724690
00:24:17.778 [2024-11-26 19:14:34.848676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:17.778 [2024-11-26 19:14:34.848681] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786580) on tqpair=0x724690
00:24:17.778 [2024-11-26 19:14:34.848690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:17.778 [2024-11-26 19:14:34.848698] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:17.778 [2024-11-26 19:14:34.848702] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:17.778 [2024-11-26 19:14:34.848706] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x724690)
00:24:17.778 [2024-11-26 19:14:34.848713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.778 [2024-11-26 19:14:34.848725] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786580, cid 3, qid 0
00:24:17.778 [2024-11-26
19:14:34.848807] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.778 [2024-11-26 19:14:34.848813] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.778 [2024-11-26 19:14:34.848817] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.778 [2024-11-26 19:14:34.848821] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786580) on tqpair=0x724690 00:24:17.778 [2024-11-26 19:14:34.848828] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.778 [2024-11-26 19:14:34.848832] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.778 [2024-11-26 19:14:34.848835] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x724690) 00:24:17.778 [2024-11-26 19:14:34.848842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.778 [2024-11-26 19:14:34.848857] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786580, cid 3, qid 0 00:24:17.778 [2024-11-26 19:14:34.848932] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.778 [2024-11-26 19:14:34.848938] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.778 [2024-11-26 19:14:34.848941] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.778 [2024-11-26 19:14:34.848945] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786580) on tqpair=0x724690 00:24:17.778 [2024-11-26 19:14:34.848950] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:24:17.778 [2024-11-26 19:14:34.848955] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:24:17.778 [2024-11-26 19:14:34.848965] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.778 [2024-11-26 19:14:34.848969] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.778 [2024-11-26 19:14:34.848972] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x724690) 00:24:17.778 [2024-11-26 19:14:34.848979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.778 [2024-11-26 19:14:34.848989] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786580, cid 3, qid 0 00:24:17.778 [2024-11-26 19:14:34.849055] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.778 [2024-11-26 19:14:34.849061] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.778 [2024-11-26 19:14:34.849065] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.778 [2024-11-26 19:14:34.849069] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786580) on tqpair=0x724690 00:24:17.778 [2024-11-26 19:14:34.849080] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.778 [2024-11-26 19:14:34.849084] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.778 [2024-11-26 19:14:34.849088] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x724690) 00:24:17.778 [2024-11-26 19:14:34.849095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.778 [2024-11-26 19:14:34.849105] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786580, cid 3, qid 0 00:24:17.778 [2024-11-26 19:14:34.849170] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.778 [2024-11-26 19:14:34.849179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.778 [2024-11-26 19:14:34.849183] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.778 [2024-11-26 19:14:34.849187] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786580) on tqpair=0x724690 00:24:17.778 [2024-11-26 19:14:34.849197] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.778 [2024-11-26 19:14:34.849201] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.778 [2024-11-26 19:14:34.849205] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x724690) 00:24:17.778 [2024-11-26 19:14:34.849226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.778 [2024-11-26 19:14:34.849237] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786580, cid 3, qid 0 00:24:17.778 [2024-11-26 19:14:34.849313] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.778 [2024-11-26 19:14:34.849319] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.778 [2024-11-26 19:14:34.849322] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.778 [2024-11-26 19:14:34.849326] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786580) on tqpair=0x724690 00:24:17.778 [2024-11-26 19:14:34.849336] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.778 [2024-11-26 19:14:34.849340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.778 [2024-11-26 19:14:34.849344] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x724690) 00:24:17.778 [2024-11-26 19:14:34.849350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.778 [2024-11-26 19:14:34.849362] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786580, cid 3, qid 0 00:24:17.778 [2024-11-26 19:14:34.849431] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.778 [2024-11-26 19:14:34.849437] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.779 [2024-11-26 19:14:34.849441] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.779 [2024-11-26 19:14:34.849445] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786580) on tqpair=0x724690 00:24:17.779 [2024-11-26 19:14:34.849455] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.779 [2024-11-26 19:14:34.849459] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.779 [2024-11-26 19:14:34.849462] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x724690) 00:24:17.779 [2024-11-26 19:14:34.849469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.779 [2024-11-26 19:14:34.849479] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786580, cid 3, qid 0 00:24:17.779 [2024-11-26 19:14:34.849553] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.779 [2024-11-26 
19:14:34.849559] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.779 [2024-11-26 19:14:34.849562] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.779 [2024-11-26 19:14:34.849566] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786580) on tqpair=0x724690 00:24:17.779 [2024-11-26 19:14:34.849576] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.779 [2024-11-26 19:14:34.849580] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.779 [2024-11-26 19:14:34.849584] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x724690) 00:24:17.779 [2024-11-26 19:14:34.849590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.779 [2024-11-26 19:14:34.849602] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786580, cid 3, qid 0 00:24:17.779 [2024-11-26 19:14:34.849664] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.779 [2024-11-26 19:14:34.849670] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.779 [2024-11-26 19:14:34.849676] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.779 [2024-11-26 19:14:34.849680] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786580) on tqpair=0x724690 00:24:17.779 [2024-11-26 19:14:34.849691] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.779 [2024-11-26 19:14:34.849695] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.779 [2024-11-26 19:14:34.849699] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x724690) 00:24:17.779 [2024-11-26 19:14:34.849706] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.779 [2024-11-26 19:14:34.849716] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786580, cid 3, qid 0 00:24:17.779 [2024-11-26 19:14:34.849788] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.779 [2024-11-26 19:14:34.849795] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.779 [2024-11-26 19:14:34.849798] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.779 [2024-11-26 19:14:34.849802] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786580) on tqpair=0x724690 00:24:17.779 [2024-11-26 19:14:34.849812] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.779 [2024-11-26 19:14:34.849816] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.779 [2024-11-26 19:14:34.849819] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x724690) 00:24:17.779 [2024-11-26 19:14:34.849826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.779 [2024-11-26 19:14:34.849837] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786580, cid 3, qid 0 00:24:17.779 [2024-11-26 19:14:34.849907] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.779 [2024-11-26 19:14:34.849913] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.779 [2024-11-26 19:14:34.849917] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.779 [2024-11-26 
19:14:34.849921] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786580) on tqpair=0x724690 00:24:17.779 [2024-11-26 19:14:34.849931] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.779 [2024-11-26 19:14:34.849934] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.779 [2024-11-26 19:14:34.849938] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x724690) 00:24:17.779 [2024-11-26 19:14:34.849945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.779 [2024-11-26 19:14:34.849955] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786580, cid 3, qid 0 00:24:17.779 [2024-11-26 19:14:34.850024] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.779 [2024-11-26 19:14:34.850030] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.779 [2024-11-26 19:14:34.850033] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.779 [2024-11-26 19:14:34.850037] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786580) on tqpair=0x724690 00:24:17.779 [2024-11-26 19:14:34.850047] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.779 [2024-11-26 19:14:34.850051] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.779 [2024-11-26 19:14:34.850054] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x724690) 00:24:17.779 [2024-11-26 19:14:34.850061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.779 [2024-11-26 19:14:34.850073] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786580, cid 3, qid 0 00:24:17.779 [2024-11-26 19:14:34.850170] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.779 [2024-11-26 19:14:34.850176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.779 [2024-11-26 19:14:34.850180] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.779 [2024-11-26 19:14:34.850184] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786580) on tqpair=0x724690 00:24:17.779 [2024-11-26 19:14:34.850197] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.779 [2024-11-26 19:14:34.850201] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.779 [2024-11-26 19:14:34.850205] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x724690) 00:24:17.779 [2024-11-26 19:14:34.850212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.779 [2024-11-26 19:14:34.850223] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786580, cid 3, qid 0 00:24:17.779 [2024-11-26 19:14:34.850286] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.779 [2024-11-26 19:14:34.850292] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.779 [2024-11-26 19:14:34.850296] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.779 [2024-11-26 19:14:34.850300] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786580) on tqpair=0x724690 00:24:17.779 [2024-11-26 19:14:34.850310] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
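From "Prepare to destruct SSD" onward this is controller teardown: the driver aborts its queued admin commands (the ABORTED - SQ DELETION completions), then runs the NVMe shutdown handshake that the FABRIC PROPERTY capsules repeating above and below implement: read CC, set CC.SHN to normal shutdown, write it back, then poll CSTS.SHST until the target reports shutdown complete or the logged 10000 ms timeout (RTD3E = 0 us) expires. The toy sketch below mimics that handshake; fabric_property_get/fabric_property_set are hypothetical stand-ins for the transport's Property Get/Set capsules, backed by an in-memory register pair so the sketch runs standalone.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>
    #include "spdk/nvme_spec.h"

    /* Toy register pair standing in for the target side; a real host moves
     * these values in FABRIC PROPERTY SET/GET capsules over the admin qpair. */
    static uint32_t g_cc, g_csts;

    static uint32_t
    fabric_property_get(size_t ofs)
    {
        return ofs == offsetof(struct spdk_nvme_registers, cc) ? g_cc : g_csts;
    }

    static void
    fabric_property_set(size_t ofs, uint32_t val)
    {
        if (ofs == offsetof(struct spdk_nvme_registers, cc)) {
            union spdk_nvme_cc_register cc = { .raw = val };
            union spdk_nvme_csts_register csts = { .raw = g_csts };

            g_cc = val;
            if (cc.bits.shn == SPDK_NVME_SHN_NORMAL) {
                /* Toy target: report shutdown complete immediately. */
                csts.bits.shst = SPDK_NVME_SHST_COMPLETE;
                g_csts = csts.raw;
            }
        }
    }

    /* The handshake traced in this log: raise CC.SHN, then poll CSTS.SHST. */
    static bool
    shutdown_ctrlr(void)
    {
        union spdk_nvme_cc_register cc;
        union spdk_nvme_csts_register csts;
        int ms;

        cc.raw = fabric_property_get(offsetof(struct spdk_nvme_registers, cc));
        cc.bits.shn = SPDK_NVME_SHN_NORMAL;
        fabric_property_set(offsetof(struct spdk_nvme_registers, cc), cc.raw);

        for (ms = 0; ms < 10000; ms++) {    /* "shutdown timeout = 10000 ms" */
            csts.raw = fabric_property_get(offsetof(struct spdk_nvme_registers, csts));
            if (csts.bits.shst == SPDK_NVME_SHST_COMPLETE) {
                return true;
            }
            usleep(1000);   /* each retry is one more PROPERTY GET capsule */
        }
        return false;
    }

    int
    main(void)
    {
        printf("shutdown %s\n", shutdown_ctrlr() ? "complete" : "timed out");
        return 0;
    }

Each pass through the polling loop corresponds to one of the repeated FABRIC PROPERTY GET qid:0 cid:3 capsules in this trace; the loop ends when CSTS.SHST reads shutdown complete.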
00:24:17.779 [2024-11-26 19:14:34.850313] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.779 [2024-11-26 19:14:34.850317] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x724690) 00:24:17.779 [2024-11-26 19:14:34.850324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.779 [2024-11-26 19:14:34.850335] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786580, cid 3, qid 0 00:24:17.779 [2024-11-26 19:14:34.850404] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.780 [2024-11-26 19:14:34.850410] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.780 [2024-11-26 19:14:34.850413] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.780 [2024-11-26 19:14:34.850417] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786580) on tqpair=0x724690 00:24:17.780 [2024-11-26 19:14:34.850427] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.780 [2024-11-26 19:14:34.850431] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.780 [2024-11-26 19:14:34.850435] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x724690) 00:24:17.780 [2024-11-26 19:14:34.850441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.780 [2024-11-26 19:14:34.850451] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786580, cid 3, qid 0 00:24:17.782 [2024-11-26 19:14:34.856169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.782 [2024-11-26 19:14:34.856177] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.782 [2024-11-26 19:14:34.856181] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.782 [2024-11-26 19:14:34.856185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786580) on tqpair=0x724690 00:24:17.782 [2024-11-26 19:14:34.856195] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:17.782 [2024-11-26 19:14:34.856200] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:17.782 [2024-11-26 19:14:34.856203] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x724690) 00:24:17.782 [2024-11-26 19:14:34.856210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.782 [2024-11-26 19:14:34.856221] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786580, cid 3, qid 0 00:24:17.782 [2024-11-26 19:14:34.856317] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:17.782 [2024-11-26 19:14:34.856323] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:17.782 [2024-11-26 19:14:34.856327] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:17.782 [2024-11-26 19:14:34.856331] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786580) on tqpair=0x724690 00:24:17.782 [2024-11-26 19:14:34.856339] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:24:17.782 0% 00:24:17.782 Data Units Read: 0 00:24:17.782 Data Units Written: 0 00:24:17.782 Host Read Commands: 0 00:24:17.782 Host Write Commands: 0 00:24:17.782 Controller Busy Time: 0 minutes 00:24:17.782 Power Cycles: 0 00:24:17.782 Power On Hours: 0 hours 00:24:17.782 Unsafe Shutdowns: 0 00:24:17.782 Unrecoverable Media Errors: 0 00:24:17.782 Lifetime Error Log Entries: 0 00:24:17.782 Warning Temperature Time: 0 minutes 00:24:17.782 Critical Temperature Time: 0 minutes 00:24:17.782 00:24:17.782 Number of Queues 00:24:17.782 ================ 00:24:17.782 Number of I/O Submission Queues: 127 00:24:17.782 Number of I/O Completion Queues: 127 00:24:17.782 00:24:17.782 Active Namespaces 00:24:17.782 ================= 00:24:17.782 Namespace ID:1 00:24:17.782 Error Recovery Timeout: Unlimited 00:24:17.782 Command Set Identifier: NVM (00h) 00:24:17.782 Deallocate: Supported 00:24:17.782 Deallocated/Unwritten Error: Not Supported 00:24:17.782 Deallocated Read Value: Unknown 00:24:17.782 Deallocate in Write Zeroes: Not Supported 00:24:17.782 Deallocated Guard Field: 0xFFFF 00:24:17.782 Flush: Supported 00:24:17.782 Reservation: Supported 00:24:17.782 Namespace Sharing Capabilities: Multiple Controllers 00:24:17.782 Size (in LBAs): 131072 (0GiB) 00:24:17.782 Capacity (in LBAs): 131072 (0GiB) 00:24:17.782 Utilization (in LBAs): 131072 (0GiB) 00:24:17.782 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:17.782 EUI64: ABCDEF0123456789 00:24:17.782 UUID: 545a7a8c-7956-4034-a4f5-2705afd2a705 00:24:17.782 Thin Provisioning: Not Supported 00:24:17.782 Per-NS Atomic Units: Yes 00:24:17.782 Atomic Boundary Size (Normal): 0 00:24:17.782 Atomic Boundary Size (PFail): 0 00:24:17.782 Atomic Boundary Offset: 0 00:24:17.782 Maximum Single Source Range Length: 65535 00:24:17.782 Maximum Copy Length: 65535 00:24:17.782 Maximum Source Range Count: 1 00:24:17.782 NGUID/EUI64 Never Reused: No 00:24:17.782 Namespace Write Protected: No 00:24:17.782 Number of LBA Formats: 1 00:24:17.782 Current LBA Format: LBA Format #00 00:24:17.782 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:17.782 00:24:17.782 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:17.782 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:24:17.782 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.782 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:17.782 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.782 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:17.782 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:17.782 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:17.782 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:24:17.782 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:17.782 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:24:17.782 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:17.782 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:17.782 rmmod nvme_tcp 00:24:17.782 rmmod nvme_fabrics 00:24:17.782 rmmod nvme_keyring 00:24:17.782 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:17.782 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:24:17.782 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:24:17.782 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3035496 ']' 00:24:17.782 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3035496 00:24:17.782 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 3035496 ']' 00:24:17.782 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 3035496 00:24:17.782 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:24:17.782 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:17.783 19:14:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3035496 00:24:18.043 19:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:18.043 19:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:18.043 19:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3035496' 00:24:18.043 killing process with pid 3035496 00:24:18.043 19:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 3035496 00:24:18.043 19:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 3035496 00:24:18.043 19:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:18.043 19:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:18.043 19:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:18.043 19:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:24:18.043 19:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:24:18.043 19:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:18.043 19:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- 
# iptables-restore 00:24:18.043 19:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:18.043 19:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:18.043 19:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.043 19:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:18.043 19:14:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:20.588 00:24:20.588 real 0m11.697s 00:24:20.588 user 0m8.454s 00:24:20.588 sys 0m6.314s 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:20.588 ************************************ 00:24:20.588 END TEST nvmf_identify 00:24:20.588 ************************************ 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.588 ************************************ 00:24:20.588 START TEST nvmf_perf 00:24:20.588 ************************************ 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:20.588 * Looking for test storage... 
00:24:20.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:20.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.588 --rc genhtml_branch_coverage=1 00:24:20.588 --rc genhtml_function_coverage=1 00:24:20.588 --rc genhtml_legend=1 00:24:20.588 --rc geninfo_all_blocks=1 00:24:20.588 --rc geninfo_unexecuted_blocks=1 00:24:20.588 00:24:20.588 ' 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:20.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.588 --rc genhtml_branch_coverage=1 00:24:20.588 --rc genhtml_function_coverage=1 00:24:20.588 --rc genhtml_legend=1 00:24:20.588 --rc geninfo_all_blocks=1 00:24:20.588 --rc geninfo_unexecuted_blocks=1 00:24:20.588 00:24:20.588 ' 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:20.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.588 --rc genhtml_branch_coverage=1 00:24:20.588 --rc genhtml_function_coverage=1 00:24:20.588 --rc genhtml_legend=1 00:24:20.588 --rc geninfo_all_blocks=1 00:24:20.588 --rc geninfo_unexecuted_blocks=1 00:24:20.588 00:24:20.588 ' 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:20.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.588 --rc genhtml_branch_coverage=1 00:24:20.588 --rc genhtml_function_coverage=1 00:24:20.588 --rc genhtml_legend=1 00:24:20.588 --rc geninfo_all_blocks=1 00:24:20.588 --rc geninfo_unexecuted_blocks=1 00:24:20.588 00:24:20.588 ' 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.588 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.589 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.589 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:20.589 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.589 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:24:20.589 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:20.589 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:20.589 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:20.589 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:20.589 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:20.589 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:20.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:20.589 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:20.589 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:20.589 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:20.589 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:20.589 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:20.589 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:20.589 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:20.589 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:20.589 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:20.589 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:20.589 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:20.589 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:20.589 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.589 19:14:37 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:20.589 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.589 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:20.589 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:20.589 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:20.589 19:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:28.733 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.733 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:28.734 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:28.734 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:28.734 19:14:44 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:28.734 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:28.734 19:14:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:28.734 19:14:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:28.734 19:14:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:28.734 19:14:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:28.734 19:14:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:28.734 19:14:45 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:28.734 19:14:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:28.734 19:14:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:28.734 19:14:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:28.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:28.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:24:28.734 00:24:28.734 --- 10.0.0.2 ping statistics --- 00:24:28.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.734 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:24:28.734 19:14:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:28.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:28.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:24:28.734 00:24:28.734 --- 10.0.0.1 ping statistics --- 00:24:28.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.734 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:24:28.734 19:14:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:28.734 19:14:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:24:28.734 19:14:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:28.734 19:14:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:28.734 19:14:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:28.734 19:14:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:28.734 19:14:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:28.734 19:14:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:28.734 19:14:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:28.734 19:14:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:28.734 19:14:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:28.734 19:14:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:28.734 19:14:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:28.734 19:14:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3040171 00:24:28.734 19:14:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3040171 00:24:28.734 19:14:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:28.734 19:14:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 3040171 ']' 00:24:28.734 19:14:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.734 19:14:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:28.734 19:14:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:24:28.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.734 19:14:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:28.734 19:14:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:28.734 [2024-11-26 19:14:45.269245] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:24:28.734 [2024-11-26 19:14:45.269315] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:28.734 [2024-11-26 19:14:45.369326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:28.734 [2024-11-26 19:14:45.421740] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:28.734 [2024-11-26 19:14:45.421791] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:28.734 [2024-11-26 19:14:45.421800] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:28.734 [2024-11-26 19:14:45.421807] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:28.734 [2024-11-26 19:14:45.421814] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:28.734 [2024-11-26 19:14:45.424207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.734 [2024-11-26 19:14:45.424459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.734 [2024-11-26 19:14:45.424459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:28.734 [2024-11-26 19:14:45.424294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:28.995 19:14:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:28.995 19:14:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:24:28.995 19:14:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:28.995 19:14:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:28.995 19:14:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:28.995 19:14:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:28.995 19:14:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:28.995 19:14:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:29.566 19:14:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:29.566 19:14:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:29.827 19:14:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:29.827 19:14:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:30.087 19:14:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
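The xtrace lines around this point show host/perf.sh assembling the NVMe-oF target entirely through SPDK's rpc.py: a 64 MiB Malloc bdev is created, a TCP transport and the subsystem nqn.2016-06.io.spdk:cnode1 are set up, both bdevs (Malloc0 and the locally attached Nvme0n1) are exposed as namespaces, and listeners are opened on 10.0.0.2:4420. A minimal standalone sketch of that same sequence is below; every command and argument is taken verbatim from this log, while the RPC shell variable and the assumption of an already-running nvmf_tgt on the default /var/tmp/spdk.sock are illustrative only:

    #!/usr/bin/env bash
    # Sketch (not the test script itself): replay the target setup that
    # perf.sh drives via rpc.py. Assumes nvmf_tgt is already running and
    # that Nvme0n1 was attached beforehand (gen_nvme.sh in this log).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC bdev_malloc_create 64 512     # 64 MiB bdev, 512 B blocks -> Malloc0
    $RPC nvmf_create_transport -t tcp -o    # transport opts as recorded above
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

With the target configured this way, the spdk_nvme_perf invocations that follow can reach it with -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420', which is exactly what the remainder of this test does.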
00:24:30.087 19:14:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:24:30.087 19:14:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:30.087 19:14:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:30.087 19:14:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:30.087 [2024-11-26 19:14:47.261467] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:30.347 19:14:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:30.347 19:14:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:30.347 19:14:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:30.608 19:14:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:30.608 19:14:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:30.869 19:14:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:30.869 [2024-11-26 19:14:48.052808] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:31.129 19:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:31.129 19:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:31.129 19:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:31.129 19:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:31.129 19:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:32.511 Initializing NVMe Controllers 00:24:32.511 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:32.511 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:32.511 Initialization complete. Launching workers. 
00:24:32.511 ======================================================== 00:24:32.511 Latency(us) 00:24:32.511 Device Information : IOPS MiB/s Average min max 00:24:32.511 PCIE (0000:65:00.0) NSID 1 from core 0: 77798.91 303.90 410.76 13.43 4983.72 00:24:32.511 ======================================================== 00:24:32.511 Total : 77798.91 303.90 410.76 13.43 4983.72 00:24:32.511 00:24:32.511 19:14:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:33.894 Initializing NVMe Controllers 00:24:33.894 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:33.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:33.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:33.894 Initialization complete. Launching workers. 00:24:33.894 ======================================================== 00:24:33.894 Latency(us) 00:24:33.894 Device Information : IOPS MiB/s Average min max 00:24:33.894 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 75.00 0.29 13634.19 251.36 45658.40 00:24:33.894 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 45.00 0.18 23078.39 7959.38 49886.86 00:24:33.894 ======================================================== 00:24:33.894 Total : 120.00 0.47 17175.76 251.36 49886.86 00:24:33.894 00:24:33.894 19:14:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:35.276 Initializing NVMe Controllers 00:24:35.276 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:35.276 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:35.276 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:35.276 Initialization complete. Launching workers. 00:24:35.276 ======================================================== 00:24:35.276 Latency(us) 00:24:35.276 Device Information : IOPS MiB/s Average min max 00:24:35.276 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11826.61 46.20 2705.84 357.88 6786.95 00:24:35.276 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3787.78 14.80 8534.65 5507.22 47881.08 00:24:35.276 ======================================================== 00:24:35.276 Total : 15614.40 60.99 4119.81 357.88 47881.08 00:24:35.276 00:24:35.276 19:14:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:35.276 19:14:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:35.276 19:14:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:37.816 Initializing NVMe Controllers 00:24:37.816 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:37.816 Controller IO queue size 128, less than required. 00:24:37.816 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:24:37.816 Controller IO queue size 128, less than required. 00:24:37.816 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:37.816 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:37.816 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:37.816 Initialization complete. Launching workers. 00:24:37.816 ======================================================== 00:24:37.816 Latency(us) 00:24:37.816 Device Information : IOPS MiB/s Average min max 00:24:37.816 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1945.59 486.40 66685.46 39500.33 124559.61 00:24:37.816 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 611.56 152.89 221819.40 80371.03 330522.27 00:24:37.816 ======================================================== 00:24:37.816 Total : 2557.15 639.29 103786.66 39500.33 330522.27 00:24:37.816 00:24:38.076 19:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:38.337 No valid NVMe controllers or AIO or URING devices found 00:24:38.337 Initializing NVMe Controllers 00:24:38.337 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:38.337 Controller IO queue size 128, less than required. 00:24:38.337 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:38.337 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:38.337 Controller IO queue size 128, less than required. 00:24:38.337 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:38.338 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:38.338 WARNING: Some requested NVMe devices were skipped 00:24:38.338 19:14:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:40.891 Initializing NVMe Controllers 00:24:40.891 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:40.891 Controller IO queue size 128, less than required. 00:24:40.891 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:40.891 Controller IO queue size 128, less than required. 00:24:40.891 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:40.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:40.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:40.891 Initialization complete. Launching workers. 
00:24:40.891 00:24:40.891 ==================== 00:24:40.891 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:40.891 TCP transport: 00:24:40.891 polls: 36326 00:24:40.891 idle_polls: 18910 00:24:40.891 sock_completions: 17416 00:24:40.891 nvme_completions: 8223 00:24:40.891 submitted_requests: 12382 00:24:40.891 queued_requests: 1 00:24:40.891 00:24:40.891 ==================== 00:24:40.891 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:40.891 TCP transport: 00:24:40.891 polls: 38286 00:24:40.891 idle_polls: 22905 00:24:40.891 sock_completions: 15381 00:24:40.891 nvme_completions: 7189 00:24:40.891 submitted_requests: 10714 00:24:40.891 queued_requests: 1 00:24:40.891 ======================================================== 00:24:40.891 Latency(us) 00:24:40.891 Device Information : IOPS MiB/s Average min max 00:24:40.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2055.20 513.80 62787.50 31828.14 101000.92 00:24:40.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1796.74 449.19 72060.41 27795.11 115760.73 00:24:40.891 ======================================================== 00:24:40.891 Total : 3851.94 962.99 67112.85 27795.11 115760.73 00:24:40.891 00:24:40.891 19:14:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:40.891 19:14:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:40.891 19:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:40.891 19:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:40.891 19:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:40.891 19:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:40.891 19:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:24:40.891 19:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:40.891 19:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:24:40.891 19:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:40.891 19:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:40.891 rmmod nvme_tcp 00:24:40.891 rmmod nvme_fabrics 00:24:40.891 rmmod nvme_keyring 00:24:40.891 19:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:40.891 19:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:24:40.891 19:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:24:40.891 19:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3040171 ']' 00:24:40.891 19:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3040171 00:24:40.891 19:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 3040171 ']' 00:24:40.891 19:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 3040171 00:24:41.152 19:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:24:41.152 19:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:41.152 19:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3040171 00:24:41.152 19:14:58 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:41.152 19:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:41.152 19:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3040171' 00:24:41.152 killing process with pid 3040171 00:24:41.152 19:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 3040171 00:24:41.152 19:14:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 3040171 00:24:43.067 19:15:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:43.067 19:15:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:43.067 19:15:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:43.067 19:15:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:24:43.067 19:15:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:24:43.067 19:15:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:43.067 19:15:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:24:43.067 19:15:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:43.067 19:15:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:43.067 19:15:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.067 19:15:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:43.068 19:15:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:45.613 00:24:45.613 real 0m24.823s 00:24:45.613 user 1m0.289s 00:24:45.613 sys 0m8.756s 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:45.613 ************************************ 00:24:45.613 END TEST nvmf_perf 00:24:45.613 ************************************ 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.613 ************************************ 00:24:45.613 START TEST nvmf_fio_host 00:24:45.613 ************************************ 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:45.613 * Looking for test storage... 
00:24:45.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:45.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.613 --rc genhtml_branch_coverage=1 00:24:45.613 --rc genhtml_function_coverage=1 00:24:45.613 --rc genhtml_legend=1 00:24:45.613 --rc geninfo_all_blocks=1 00:24:45.613 --rc geninfo_unexecuted_blocks=1 00:24:45.613 00:24:45.613 ' 00:24:45.613 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:45.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.614 --rc genhtml_branch_coverage=1 00:24:45.614 --rc genhtml_function_coverage=1 00:24:45.614 --rc genhtml_legend=1 00:24:45.614 --rc geninfo_all_blocks=1 00:24:45.614 --rc geninfo_unexecuted_blocks=1 00:24:45.614 00:24:45.614 ' 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:45.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.614 --rc genhtml_branch_coverage=1 00:24:45.614 --rc genhtml_function_coverage=1 00:24:45.614 --rc genhtml_legend=1 00:24:45.614 --rc geninfo_all_blocks=1 00:24:45.614 --rc geninfo_unexecuted_blocks=1 00:24:45.614 00:24:45.614 ' 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:45.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.614 --rc genhtml_branch_coverage=1 00:24:45.614 --rc genhtml_function_coverage=1 00:24:45.614 --rc genhtml_legend=1 00:24:45.614 --rc geninfo_all_blocks=1 00:24:45.614 --rc geninfo_unexecuted_blocks=1 00:24:45.614 00:24:45.614 ' 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:45.614 19:15:02 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:45.614 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:45.614 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:45.615 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:45.615 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:45.615 
19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:45.615 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:45.615 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:45.615 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:45.615 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:45.615 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:45.615 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.615 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:45.615 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.615 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:45.615 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:45.615 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:45.615 19:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.754 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:53.754 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:53.754 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:53.754 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:53.754 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:53.754 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:53.754 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:53.754 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:53.754 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:53.754 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:53.754 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:53.754 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:53.754 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:53.754 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:53.754 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:53.754 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:53.754 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:53.754 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:53.754 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:53.754 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:53.754 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:53.754 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:53.754 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:53.754 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:53.754 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:53.755 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:53.755 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:53.755 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:53.755 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:53.755 19:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:53.755 19:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:53.755 19:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:53.755 19:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:53.755 19:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:53.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:53.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.477 ms 00:24:53.755 00:24:53.755 --- 10.0.0.2 ping statistics --- 00:24:53.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:53.755 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:24:53.755 19:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:53.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:53.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:24:53.755 00:24:53.755 --- 10.0.0.1 ping statistics --- 00:24:53.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:53.755 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:24:53.755 19:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:53.755 19:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:24:53.755 19:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:53.755 19:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:53.755 19:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:53.755 19:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:53.755 19:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:53.755 19:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:53.755 19:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:53.755 19:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:53.755 19:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:53.755 19:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:53.755 19:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.755 19:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3047591 00:24:53.755 19:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:53.755 19:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:53.755 19:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3047591 00:24:53.755 19:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 3047591 ']' 00:24:53.755 19:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:53.755 19:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:53.755 19:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:53.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:53.755 19:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:53.755 19:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.755 [2024-11-26 19:15:10.188329] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
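Note on the environment behind the target start above: nvmftestinit moved one port of the e810 pair (cvl_0_0) into a private network namespace and addressed both ends, so the 10.0.0.1 to 10.0.0.2 pings cross a real link rather than loopback. A condensed replay of the commands echoed in the trace (device names, addresses and flags exactly as in this log; address flushes and error handling omitted):

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                  # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe-oF traffic
modprobe nvme-tcp                                # host-side initiator module
# The target itself then runs inside the namespace:
ip netns exec "$NS" /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF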
00:24:53.755 [2024-11-26 19:15:10.188401] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:53.755 [2024-11-26 19:15:10.289037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:53.755 [2024-11-26 19:15:10.343093] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:53.755 [2024-11-26 19:15:10.343149] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:53.755 [2024-11-26 19:15:10.343167] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:53.755 [2024-11-26 19:15:10.343175] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:53.756 [2024-11-26 19:15:10.343181] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:53.756 [2024-11-26 19:15:10.345221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:53.756 [2024-11-26 19:15:10.345459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.756 [2024-11-26 19:15:10.345459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:53.756 [2024-11-26 19:15:10.345297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:54.015 19:15:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:54.015 19:15:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:24:54.015 19:15:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:54.015 [2024-11-26 19:15:11.184388] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:54.015 19:15:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:54.015 19:15:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:54.015 19:15:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.275 19:15:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:54.275 Malloc1 00:24:54.535 19:15:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:54.535 19:15:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:54.796 19:15:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:55.057 [2024-11-26 19:15:12.061083] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:55.057 19:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:55.318 19:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:55.318 19:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:55.318 19:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:55.318 19:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:55.318 19:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:55.318 19:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:55.318 19:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:55.318 19:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:55.318 19:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:55.318 19:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:55.318 19:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:55.318 19:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:55.318 19:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:55.318 19:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:55.318 19:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:55.318 19:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:55.318 19:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:55.318 19:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:55.318 19:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:55.318 19:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:55.318 19:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:55.318 19:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:55.318 19:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:55.579 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:55.579 fio-3.35 00:24:55.579 Starting 1 thread 00:24:58.141 00:24:58.141 test: (groupid=0, jobs=1): 
err= 0: pid=3048352: Tue Nov 26 19:15:15 2024 00:24:58.141 read: IOPS=13.4k, BW=52.3MiB/s (54.8MB/s)(105MiB/2004msec) 00:24:58.141 slat (usec): min=2, max=289, avg= 2.20, stdev= 2.50 00:24:58.141 clat (usec): min=3828, max=9016, avg=5264.79, stdev=583.08 00:24:58.141 lat (usec): min=3830, max=9023, avg=5266.99, stdev=583.27 00:24:58.141 clat percentiles (usec): 00:24:58.141 | 1.00th=[ 4424], 5.00th=[ 4621], 10.00th=[ 4752], 20.00th=[ 4883], 00:24:58.141 | 30.00th=[ 5014], 40.00th=[ 5080], 50.00th=[ 5211], 60.00th=[ 5276], 00:24:58.141 | 70.00th=[ 5342], 80.00th=[ 5473], 90.00th=[ 5669], 95.00th=[ 5932], 00:24:58.141 | 99.00th=[ 8094], 99.50th=[ 8291], 99.90th=[ 8848], 99.95th=[ 8848], 00:24:58.141 | 99.99th=[ 8979] 00:24:58.141 bw ( KiB/s): min=47984, max=55512, per=99.95%, avg=53492.00, stdev=3677.24, samples=4 00:24:58.141 iops : min=11996, max=13878, avg=13373.00, stdev=919.31, samples=4 00:24:58.141 write: IOPS=13.4k, BW=52.2MiB/s (54.8MB/s)(105MiB/2004msec); 0 zone resets 00:24:58.141 slat (usec): min=2, max=276, avg= 2.29, stdev= 1.88 00:24:58.141 clat (usec): min=2991, max=7976, avg=4254.71, stdev=489.14 00:24:58.141 lat (usec): min=3009, max=7978, avg=4257.00, stdev=489.39 00:24:58.141 clat percentiles (usec): 00:24:58.141 | 1.00th=[ 3523], 5.00th=[ 3720], 10.00th=[ 3818], 20.00th=[ 3949], 00:24:58.141 | 30.00th=[ 4047], 40.00th=[ 4113], 50.00th=[ 4178], 60.00th=[ 4228], 00:24:58.141 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4621], 95.00th=[ 4817], 00:24:58.141 | 99.00th=[ 6521], 99.50th=[ 6718], 99.90th=[ 7242], 99.95th=[ 7373], 00:24:58.141 | 99.99th=[ 7898] 00:24:58.141 bw ( KiB/s): min=48648, max=55544, per=99.98%, avg=53458.00, stdev=3230.53, samples=4 00:24:58.141 iops : min=12162, max=13886, avg=13364.50, stdev=807.63, samples=4 00:24:58.141 lat (msec) : 4=12.57%, 10=87.43% 00:24:58.141 cpu : usr=74.39%, sys=24.21%, ctx=30, majf=0, minf=16 00:24:58.141 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:58.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:58.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:58.141 issued rwts: total=26814,26788,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:58.141 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:58.141 00:24:58.141 Run status group 0 (all jobs): 00:24:58.141 READ: bw=52.3MiB/s (54.8MB/s), 52.3MiB/s-52.3MiB/s (54.8MB/s-54.8MB/s), io=105MiB (110MB), run=2004-2004msec 00:24:58.141 WRITE: bw=52.2MiB/s (54.8MB/s), 52.2MiB/s-52.2MiB/s (54.8MB/s-54.8MB/s), io=105MiB (110MB), run=2004-2004msec 00:24:58.141 19:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:58.141 19:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:58.141 19:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:58.141 19:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:58.141 19:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:58.141 
19:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:58.141 19:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:58.141 19:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:58.141 19:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:58.141 19:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:58.141 19:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:58.141 19:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:58.141 19:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:58.141 19:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:58.141 19:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:58.141 19:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:58.141 19:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:58.141 19:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:58.141 19:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:58.141 19:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:58.141 19:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:58.141 19:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:58.404 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:58.404 fio-3.35 00:24:58.404 Starting 1 thread 00:25:00.945 [2024-11-26 19:15:17.685008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e25580 is same with the state(6) to be set 00:25:00.945 00:25:00.946 test: (groupid=0, jobs=1): err= 0: pid=3049172: Tue Nov 26 19:15:17 2024 00:25:00.946 read: IOPS=9491, BW=148MiB/s (156MB/s)(298MiB/2006msec) 00:25:00.946 slat (usec): min=3, max=111, avg= 3.62, stdev= 1.58 00:25:00.946 clat (usec): min=1357, max=15940, avg=8302.76, stdev=1892.90 00:25:00.946 lat (usec): min=1360, max=15944, avg=8306.39, stdev=1893.01 00:25:00.946 clat percentiles (usec): 00:25:00.946 | 1.00th=[ 4228], 5.00th=[ 5342], 10.00th=[ 5932], 20.00th=[ 6587], 00:25:00.946 | 30.00th=[ 7177], 40.00th=[ 7701], 50.00th=[ 8225], 60.00th=[ 8848], 00:25:00.946 | 70.00th=[ 9372], 80.00th=[10159], 90.00th=[10683], 95.00th=[11207], 00:25:00.946 | 99.00th=[12518], 99.50th=[13042], 99.90th=[13698], 99.95th=[13829], 00:25:00.946 | 99.99th=[14222] 00:25:00.946 bw ( KiB/s): min=66304, max=83712, per=49.32%, avg=74896.50, stdev=7822.72, samples=4 00:25:00.946 iops : min= 4144, max= 5232, 
avg=4681.00, stdev=488.90, samples=4 00:25:00.946 write: IOPS=5581, BW=87.2MiB/s (91.4MB/s)(153MiB/1759msec); 0 zone resets 00:25:00.946 slat (usec): min=39, max=329, avg=40.87, stdev= 6.97 00:25:00.946 clat (usec): min=2171, max=14408, avg=9079.43, stdev=1330.03 00:25:00.946 lat (usec): min=2211, max=14563, avg=9120.30, stdev=1331.36 00:25:00.946 clat percentiles (usec): 00:25:00.946 | 1.00th=[ 6128], 5.00th=[ 7177], 10.00th=[ 7570], 20.00th=[ 7963], 00:25:00.946 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9372], 00:25:00.946 | 70.00th=[ 9634], 80.00th=[10159], 90.00th=[10814], 95.00th=[11338], 00:25:00.946 | 99.00th=[12518], 99.50th=[13042], 99.90th=[13960], 99.95th=[14091], 00:25:00.946 | 99.99th=[14353] 00:25:00.946 bw ( KiB/s): min=69344, max=86304, per=87.38%, avg=78031.00, stdev=7499.89, samples=4 00:25:00.946 iops : min= 4334, max= 5394, avg=4876.75, stdev=468.62, samples=4 00:25:00.946 lat (msec) : 2=0.04%, 4=0.52%, 10=77.07%, 20=22.37% 00:25:00.946 cpu : usr=84.64%, sys=13.82%, ctx=14, majf=0, minf=36 00:25:00.946 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:00.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:00.946 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:00.946 issued rwts: total=19040,9817,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:00.946 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:00.946 00:25:00.946 Run status group 0 (all jobs): 00:25:00.946 READ: bw=148MiB/s (156MB/s), 148MiB/s-148MiB/s (156MB/s-156MB/s), io=298MiB (312MB), run=2006-2006msec 00:25:00.946 WRITE: bw=87.2MiB/s (91.4MB/s), 87.2MiB/s-87.2MiB/s (91.4MB/s-91.4MB/s), io=153MiB (161MB), run=1759-1759msec 00:25:00.946 19:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:00.946 19:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:00.946 19:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:00.946 19:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:00.946 19:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:00.946 19:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:00.946 19:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:25:00.946 19:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:00.946 19:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:25:00.946 19:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:00.946 19:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:00.946 rmmod nvme_tcp 00:25:00.946 rmmod nvme_fabrics 00:25:00.946 rmmod nvme_keyring 00:25:00.946 19:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:00.946 19:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:25:00.946 19:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:25:00.946 19:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3047591 ']' 00:25:00.946 19:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3047591 00:25:00.946 19:15:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 3047591 ']' 00:25:00.946 19:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 3047591 00:25:00.946 19:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:25:00.946 19:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:00.946 19:15:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3047591 00:25:00.946 19:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:00.946 19:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:00.946 19:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3047591' 00:25:00.946 killing process with pid 3047591 00:25:00.946 19:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 3047591 00:25:00.946 19:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 3047591 00:25:01.206 19:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:01.206 19:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:01.206 19:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:01.206 19:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:25:01.206 19:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:25:01.206 19:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:01.206 19:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:01.207 19:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:01.207 19:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:01.207 19:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.207 19:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:01.207 19:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.134 19:15:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:03.134 00:25:03.134 real 0m17.953s 00:25:03.134 user 0m59.583s 00:25:03.134 sys 0m7.686s 00:25:03.134 19:15:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:03.134 19:15:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.134 ************************************ 00:25:03.134 END TEST nvmf_fio_host 00:25:03.134 ************************************ 00:25:03.134 19:15:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:03.134 19:15:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:03.134 19:15:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:03.134 19:15:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.134 ************************************ 00:25:03.134 START TEST nvmf_failover 00:25:03.134 
************************************ 00:25:03.134 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:03.396 * Looking for test storage... 00:25:03.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:03.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.396 --rc genhtml_branch_coverage=1 00:25:03.396 --rc genhtml_function_coverage=1 00:25:03.396 --rc genhtml_legend=1 00:25:03.396 --rc geninfo_all_blocks=1 00:25:03.396 --rc geninfo_unexecuted_blocks=1 00:25:03.396 00:25:03.396 ' 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:03.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.396 --rc genhtml_branch_coverage=1 00:25:03.396 --rc genhtml_function_coverage=1 00:25:03.396 --rc genhtml_legend=1 00:25:03.396 --rc geninfo_all_blocks=1 00:25:03.396 --rc geninfo_unexecuted_blocks=1 00:25:03.396 00:25:03.396 ' 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:03.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.396 --rc genhtml_branch_coverage=1 00:25:03.396 --rc genhtml_function_coverage=1 00:25:03.396 --rc genhtml_legend=1 00:25:03.396 --rc geninfo_all_blocks=1 00:25:03.396 --rc geninfo_unexecuted_blocks=1 00:25:03.396 00:25:03.396 ' 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:03.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.396 --rc genhtml_branch_coverage=1 00:25:03.396 --rc genhtml_function_coverage=1 00:25:03.396 --rc genhtml_legend=1 00:25:03.396 --rc geninfo_all_blocks=1 00:25:03.396 --rc geninfo_unexecuted_blocks=1 00:25:03.396 00:25:03.396 ' 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- 
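The lt 1.15 2 call traced above walks scripts/common.sh's cmp_versions: split both version strings on '.', '-' and ':', then compare field by field until one side wins. A compact re-implementation of the same idea, illustrative only; the real helper also validates each field through its decimal guard and supports other operators:

    # Succeed if $1 < $2, comparing numeric fields left to right
    # (mirrors the IFS=.-: / read -ra / per-field (( )) steps in the trace).
    version_lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v a b len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}       # missing fields count as 0
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1                                  # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov predates 2.x, use the legacy --rc flags"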
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.396 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.397 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.397 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.397 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.397 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:03.397 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.397 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:25:03.397 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:03.397 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:03.397 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:03.397 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.397 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.397 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:03.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:03.397 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:03.397 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:03.397 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:03.397 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:03.397 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:03.397 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
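Note the genuine shell error captured just above: nvmf/common.sh line 33 runs '[' '' -eq 1 ']' and test complains "integer expression expected" because an unset flag reaches a numeric comparison as an empty string. The test still evaluates false, so the run continues, but the conventional guard is to default the expansion first. SPDK_TEST_EXAMPLE below is a hypothetical stand-in for whichever flag is unset here, and this is an illustrative fix, not the repo's actual patch:

    # Breaks when the variable is empty:  [ "$SPDK_TEST_EXAMPLE" -eq 1 ]
    # Defaulting gives the numeric test a real operand:
    if [ "${SPDK_TEST_EXAMPLE:-0}" -eq 1 ]; then
        echo "feature enabled"
    fi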
00:25:03.397 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:03.397 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:03.397 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:03.397 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:03.397 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:03.397 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:03.397 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:03.397 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.397 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.397 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.397 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:03.397 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:03.397 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:25:03.397 19:15:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:11.717 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:11.717 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:11.717 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:11.717 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:11.717 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:11.718 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:11.718 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:11.718 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:25:11.718 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:11.718 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:11.718 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:11.718 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:11.718 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:11.718 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:11.718 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:11.718 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:11.718 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:11.718 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:11.718 19:15:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:11.718 19:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:11.718 19:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:11.718 19:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:11.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:11.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms 00:25:11.718 00:25:11.718 --- 10.0.0.2 ping statistics --- 00:25:11.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.718 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:25:11.718 19:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:11.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:11.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:25:11.718 00:25:11.718 --- 10.0.0.1 ping statistics --- 00:25:11.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.718 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:25:11.718 19:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:11.718 19:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:25:11.718 19:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:11.718 19:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:11.718 19:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:11.718 19:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:11.718 19:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:11.718 19:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:11.718 19:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:11.718 19:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:11.718 19:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:11.718 19:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:11.718 19:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:11.718 19:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3054244 00:25:11.718 19:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 3054244 00:25:11.718 19:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:11.718 19:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3054244 ']' 00:25:11.718 19:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:11.718 19:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:11.718 19:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:11.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:11.718 19:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:11.718 19:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:11.718 [2024-11-26 19:15:28.136399] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:25:11.718 [2024-11-26 19:15:28.136471] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:11.718 [2024-11-26 19:15:28.220960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:11.718 [2024-11-26 19:15:28.272640] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
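The nvmf_tcp_init sequence traced above turns the two E810 ports into a self-contained test link: the target port is moved into a private network namespace, each end gets a 10.0.0.x/24 address, an iptables rule admits the NVMe/TCP port, and a ping in each direction verifies the path. The same commands, condensed from the trace into a standalone snippet (run as root; interface and namespace names exactly as logged):

    ip netns add cvl_0_0_ns_spdk                         # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator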
00:25:11.718 [2024-11-26 19:15:28.272691] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:11.718 [2024-11-26 19:15:28.272700] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:11.718 [2024-11-26 19:15:28.272708] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:11.718 [2024-11-26 19:15:28.272714] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:11.718 [2024-11-26 19:15:28.274709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:11.718 [2024-11-26 19:15:28.274877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.718 [2024-11-26 19:15:28.274877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:11.980 19:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:11.980 19:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:11.980 19:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:11.980 19:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:11.980 19:15:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:11.980 19:15:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:11.980 19:15:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:11.980 [2024-11-26 19:15:29.173384] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:12.241 19:15:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:12.241 Malloc0 00:25:12.241 19:15:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:12.503 19:15:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:12.765 19:15:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:13.025 [2024-11-26 19:15:30.007264] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:13.025 19:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:13.025 [2024-11-26 19:15:30.207859] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:13.287 19:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:13.287 [2024-11-26 19:15:30.404553] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:25:13.287 19:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3054615 00:25:13.287 19:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:13.287 19:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:13.287 19:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3054615 /var/tmp/bdevperf.sock 00:25:13.287 19:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3054615 ']' 00:25:13.287 19:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:13.287 19:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:13.287 19:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:13.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:13.287 19:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:13.287 19:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:14.226 19:15:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:14.226 19:15:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:14.226 19:15:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:14.487 NVMe0n1 00:25:14.487 19:15:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:14.747 00:25:14.747 19:15:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:14.747 19:15:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3054953 00:25:14.747 19:15:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:25:15.687 19:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:15.948 [2024-11-26 19:15:33.038239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4ed0 is same with the state(6) to be set 00:25:15.948 [2024-11-26 19:15:33.038280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4ed0 is same with the state(6) to be set 00:25:15.948 [2024-11-26 19:15:33.038286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4ed0 is same with the state(6) to be set 00:25:15.948 
[2024-11-26 19:15:33.038290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4ed0 is same with the state(6) to be set
00:25:15.950 (last message repeated through [2024-11-26 19:15:33.038655])
00:25:15.950 19:15:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:25:19.245 19:15:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:19.506
00:25:19.506 19:15:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:19.506 [2024-11-26 19:15:36.652995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b5cf0 is same with the state(6) to be set
00:25:19.506 (last message repeated through [2024-11-26 19:15:36.653216])
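For orientation amid the ERROR bursts: bdevperf attached NVMe0 to the same subsystem through several portals with -x failover, and the script now removes the active listener each round so I/O has to migrate to a surviving path; each removal tears down live qpairs, which is what emits the recv-state noise. The portal shuffle reduces to this RPC skeleton (paths and names exactly as traced):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    # Host side: one bdev, multiple paths, failover policy.
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    # Target side: drop the portal that is carrying I/O; traffic fails over.
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420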
00:25:19.506 19:15:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:25:22.812 19:15:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:22.812 [2024-11-26 19:15:39.845073] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:22.812 19:15:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:25:23.752 19:15:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:24.012 [2024-11-26 19:15:41.036616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b6bf0 is same with the state(6) to be set
00:25:24.012 (last message repeated through [2024-11-26 19:15:41.036702])
00:25:24.012 19:15:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3054953
00:25:30.596 {
00:25:30.596   "results": [
00:25:30.596     {
00:25:30.596       "job": "NVMe0n1",
00:25:30.596       "core_mask": "0x1",
00:25:30.596       "workload": "verify",
00:25:30.596       "status": "finished",
00:25:30.596       "verify_range": {
00:25:30.596         "start": 0,
00:25:30.596         "length": 16384
00:25:30.596       },
00:25:30.596       "queue_depth": 128,
00:25:30.596       "io_size": 4096,
00:25:30.596       "runtime": 15.007151,
00:25:30.596       "iops": 12366.371205300726,
00:25:30.596       "mibps": 48.30613752070596,
00:25:30.596       "io_failed": 10181,
00:25:30.596       "io_timeout": 0,
00:25:30.596       "avg_latency_us": 9790.987241777982,
00:25:30.596       "min_latency_us": 556.3733333333333,
00:25:30.596       "max_latency_us": 21736.106666666667
00:25:30.596     }
00:25:30.596   ],
00:25:30.596   "core_count": 1
00:25:30.596 }
00:25:30.596 19:15:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3054615
00:25:30.596 19:15:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3054615 ']'
00:25:30.596 19:15:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3054615
00:25:30.596 19:15:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:25:30.596 19:15:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:30.596 19:15:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3054615
00:25:30.596 19:15:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:30.596 19:15:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:30.596 19:15:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3054615'
00:25:30.596 killing process with pid 3054615
00:25:30.596 19:15:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3054615
00:25:30.596 19:15:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3054615
00:25:30.596 19:15:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
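The JSON block printed before killprocess is bdevperf's final summary for the NVMe0n1 job. As a quick sanity check (an editorial sketch, not part of the test), the reported throughput follows directly from the reported IOPS and the 4096-byte io_size:

# mibps = iops * io_size / 2^20, using the fields from the results block above.
awk 'BEGIN { printf "%.8f\n", 12366.371205300726 * 4096 / 1048576 }'
# -> 48.30613752, matching the reported "mibps" of 48.30613752070596.
# The 10181 "io_failed" entries are presumably the requests completed as
# ABORTED - SQ DELETION while paths were switched; the job still ends "finished".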
00:25:30.596 [2024-11-26 19:15:30.492427] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization...
00:25:30.596 [2024-11-26 19:15:30.492503] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3054615 ]
00:25:30.596 [2024-11-26 19:15:30.582236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:30.596 [2024-11-26 19:15:30.617917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:30.596 Running I/O for 15 seconds...
00:25:30.596 11071.00 IOPS, 43.25 MiB/s [2024-11-26T18:15:47.809Z]
00:25:30.596 [2024-11-26 19:15:33.042075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:95648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:30.596 [2024-11-26 19:15:33.042107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:30.596 [... the same command/completion pair repeats for the remaining READ commands (lba:95656-95840) and WRITE commands (lba:95848-96408); every in-flight command on the dying qpair completes as ABORTED - SQ DELETION (00/08) ...]
00:25:30.598 [2024-11-26 19:15:33.043804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:30.598 [2024-11-26 19:15:33.043813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96416 len:8 PRP1 0x0 PRP2 0x0
00:25:30.598 [2024-11-26 19:15:33.043821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:30.598 [... the "aborting queued i/o" / manual-completion triplets repeat for WRITE lba:96424-96664 ...]
00:25:30.600 [2024-11-26 19:15:33.056044] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:25:30.600 [2024-11-26 19:15:33.056074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:30.600 [2024-11-26 19:15:33.056085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:30.600 [... the ASYNC EVENT REQUEST abort pair repeats for qid:0 cid:1-3 ...]
00:25:30.600 [2024-11-26 19:15:33.056148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:25:30.600 [2024-11-26 19:15:33.056194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc7da0 (9): Bad file descriptor
00:25:30.600 [2024-11-26 19:15:33.059843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:25:30.600 [2024-11-26 19:15:33.126044] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:25:30.600 10957.00 IOPS, 42.80 MiB/s [2024-11-26T18:15:47.813Z] 11031.33 IOPS, 43.09 MiB/s [2024-11-26T18:15:47.813Z] 11344.75 IOPS, 44.32 MiB/s [2024-11-26T18:15:47.813Z]
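The NOTICE lines above are the failover itself: the admin queue's async event requests are aborted, the controller is marked failed, and bdev_nvme retargets it from 10.0.0.2:4420 to the alternate trid on 4421, after which the reset succeeds and throughput recovers. For context, a hedged sketch of how a two-path controller like this is typically attached with rpc.py (flag names as in current SPDK; verify against your tree before relying on them):

# Sketch: attach one bdev_nvme controller with two TCP paths so it can fail over.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
# Primary path; "-x failover" keeps additional paths passive until needed.
"$RPC" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN" -x failover
# Alternate path registered under the same controller name.
"$RPC" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$NQN" -x failover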
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.600 [2024-11-26 19:15:36.653684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:63616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.600 [2024-11-26 19:15:36.653690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.600 [2024-11-26 19:15:36.653697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:63624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.600 [2024-11-26 19:15:36.653702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.600 [2024-11-26 19:15:36.653709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:63632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.600 [2024-11-26 19:15:36.653714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.600 [2024-11-26 19:15:36.653725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:63640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.600 [2024-11-26 19:15:36.653731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.600 [2024-11-26 19:15:36.653738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:63648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.600 [2024-11-26 19:15:36.653743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.600 [2024-11-26 19:15:36.653750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.600 [2024-11-26 19:15:36.653755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.600 [2024-11-26 19:15:36.653762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:63664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.600 [2024-11-26 19:15:36.653768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.600 [2024-11-26 19:15:36.653774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:63672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.600 [2024-11-26 19:15:36.653780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.600 [2024-11-26 19:15:36.653786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:63680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.600 [2024-11-26 19:15:36.653791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.600 [2024-11-26 19:15:36.653798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.600 [2024-11-26 19:15:36.653803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.600 [2024-11-26 19:15:36.653810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.600 [2024-11-26 19:15:36.653816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.600 [2024-11-26 19:15:36.653822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:63704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.600 [2024-11-26 19:15:36.653828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.600 [2024-11-26 19:15:36.653834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.600 [2024-11-26 19:15:36.653839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.600 [2024-11-26 19:15:36.653846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.600 [2024-11-26 19:15:36.653852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.600 [2024-11-26 19:15:36.653858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.600 [2024-11-26 19:15:36.653864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.600 [2024-11-26 19:15:36.653871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.600 [2024-11-26 19:15:36.653877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.600 [2024-11-26 19:15:36.653884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.600 [2024-11-26 19:15:36.653889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.600 [2024-11-26 19:15:36.653896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:64392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.600 [2024-11-26 19:15:36.653901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.600 [2024-11-26 19:15:36.653908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.601 [2024-11-26 19:15:36.653914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 [2024-11-26 19:15:36.653920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.601 [2024-11-26 19:15:36.653925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 
[2024-11-26 19:15:36.653932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:64416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.601 [2024-11-26 19:15:36.653937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 [2024-11-26 19:15:36.653944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.601 [2024-11-26 19:15:36.653949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 [2024-11-26 19:15:36.653955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.601 [2024-11-26 19:15:36.653960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 [2024-11-26 19:15:36.653967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:64440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.601 [2024-11-26 19:15:36.653972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 [2024-11-26 19:15:36.653979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.601 [2024-11-26 19:15:36.653984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 [2024-11-26 19:15:36.653991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.601 [2024-11-26 19:15:36.653996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 [2024-11-26 19:15:36.654003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:63768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.601 [2024-11-26 19:15:36.654008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 [2024-11-26 19:15:36.654014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:63776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.601 [2024-11-26 19:15:36.654020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 [2024-11-26 19:15:36.654030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.601 [2024-11-26 19:15:36.654036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 [2024-11-26 19:15:36.654043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.601 [2024-11-26 19:15:36.654049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 [2024-11-26 19:15:36.654055] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:63800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.601 [2024-11-26 19:15:36.654061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 [2024-11-26 19:15:36.654067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:63808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.601 [2024-11-26 19:15:36.654073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 [2024-11-26 19:15:36.654079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:63816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.601 [2024-11-26 19:15:36.654085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 [2024-11-26 19:15:36.654091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:63824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.601 [2024-11-26 19:15:36.654096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 [2024-11-26 19:15:36.654103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:63832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.601 [2024-11-26 19:15:36.654108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 [2024-11-26 19:15:36.654115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.601 [2024-11-26 19:15:36.654120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 [2024-11-26 19:15:36.654127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.601 [2024-11-26 19:15:36.654132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 [2024-11-26 19:15:36.654140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.601 [2024-11-26 19:15:36.654145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 [2024-11-26 19:15:36.654152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.601 [2024-11-26 19:15:36.654157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 [2024-11-26 19:15:36.654168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:63872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.601 [2024-11-26 19:15:36.654173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 [2024-11-26 19:15:36.654180] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:112 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.601 [2024-11-26 19:15:36.654186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 [2024-11-26 19:15:36.654193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:63888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.601 [2024-11-26 19:15:36.654198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 [2024-11-26 19:15:36.654205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:63896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.601 [2024-11-26 19:15:36.654210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 [2024-11-26 19:15:36.654217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:63904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.601 [2024-11-26 19:15:36.654223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 [2024-11-26 19:15:36.654229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:63912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.601 [2024-11-26 19:15:36.654235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 [2024-11-26 19:15:36.654242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:63920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.601 [2024-11-26 19:15:36.654247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 [2024-11-26 19:15:36.654254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.601 [2024-11-26 19:15:36.654260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 [2024-11-26 19:15:36.654266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:63936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.601 [2024-11-26 19:15:36.654272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 [2024-11-26 19:15:36.654278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:63944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.601 [2024-11-26 19:15:36.654284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 [2024-11-26 19:15:36.654290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:63952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.601 [2024-11-26 19:15:36.654295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 [2024-11-26 19:15:36.654302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 
lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.601 [2024-11-26 19:15:36.654307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 [2024-11-26 19:15:36.654314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:63968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.601 [2024-11-26 19:15:36.654320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 [2024-11-26 19:15:36.654326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.601 [2024-11-26 19:15:36.654331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 [2024-11-26 19:15:36.654338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:63984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.601 [2024-11-26 19:15:36.654344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 [2024-11-26 19:15:36.654352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:63992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.601 [2024-11-26 19:15:36.654357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 [2024-11-26 19:15:36.654363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.601 [2024-11-26 19:15:36.654369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 [2024-11-26 19:15:36.654375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:64448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.601 [2024-11-26 19:15:36.654380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.601 [2024-11-26 19:15:36.654387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:64456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.601 [2024-11-26 19:15:36.654392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:64008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:64016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:64024 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:64032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:64040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:64048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:64056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:64064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:64072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:64096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 
19:15:36.654549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:64112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:64120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:64128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:64152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:64160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:64168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:64184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654667] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:64192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:64208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:64216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:64224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:64232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:64240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:64248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:64256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:64264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:64272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:64280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:64296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:64304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:64312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.602 [2024-11-26 19:15:36.654864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:64320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.602 [2024-11-26 19:15:36.654870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.603 [2024-11-26 19:15:36.654876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:64328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.603 [2024-11-26 19:15:36.654881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.603 [2024-11-26 19:15:36.654888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:64336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.603 [2024-11-26 19:15:36.654893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.603 [2024-11-26 19:15:36.654900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:64344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.603 [2024-11-26 19:15:36.654905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.603 [2024-11-26 19:15:36.654911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:64352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.603 [2024-11-26 19:15:36.654917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.603 [2024-11-26 19:15:36.654924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.603 [2024-11-26 19:15:36.654929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.603 [2024-11-26 19:15:36.654936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:64368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.603 [2024-11-26 19:15:36.654942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.603 [2024-11-26 19:15:36.654949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:64376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.603 [2024-11-26 19:15:36.654954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.603 [2024-11-26 19:15:36.654961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:64384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.603 [2024-11-26 19:15:36.654967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.603 [2024-11-26 19:15:36.654974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.603 [2024-11-26 19:15:36.654979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.603 [2024-11-26 19:15:36.654986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:64472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.603 [2024-11-26 19:15:36.654991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.603 [2024-11-26 19:15:36.654997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:64480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.603 [2024-11-26 19:15:36.655003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.603 [2024-11-26 19:15:36.655009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:64488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.603 [2024-11-26 19:15:36.655014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.603 [2024-11-26 19:15:36.655021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.603 [2024-11-26 19:15:36.655027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.603 
[2024-11-26 19:15:36.655033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:64504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.603 [2024-11-26 19:15:36.655038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.603 [2024-11-26 19:15:36.655045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.603 [2024-11-26 19:15:36.655050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.603 [2024-11-26 19:15:36.655056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.603 [2024-11-26 19:15:36.655062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.603 [2024-11-26 19:15:36.655068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.603 [2024-11-26 19:15:36.655073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.603 [2024-11-26 19:15:36.655080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.603 [2024-11-26 19:15:36.655085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.603 [2024-11-26 19:15:36.655091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.603 [2024-11-26 19:15:36.655097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.603 [2024-11-26 19:15:36.655104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.603 [2024-11-26 19:15:36.655109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.603 [2024-11-26 19:15:36.655115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:64560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.603 [2024-11-26 19:15:36.655120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.603 [2024-11-26 19:15:36.655127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:64568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.603 [2024-11-26 19:15:36.655132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.603 [2024-11-26 19:15:36.655139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.603 [2024-11-26 19:15:36.655144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.603 [2024-11-26 19:15:36.655150] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:64584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.603 [2024-11-26 19:15:36.655155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.603 [2024-11-26 19:15:36.655175] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.603 [2024-11-26 19:15:36.655180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.603 [2024-11-26 19:15:36.655185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64592 len:8 PRP1 0x0 PRP2 0x0 00:25:30.603 [2024-11-26 19:15:36.655192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.603 [2024-11-26 19:15:36.655228] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:30.603 [2024-11-26 19:15:36.655244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.603 [2024-11-26 19:15:36.655250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.603 [2024-11-26 19:15:36.655255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.603 [2024-11-26 19:15:36.655261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.603 [2024-11-26 19:15:36.655266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.603 [2024-11-26 19:15:36.655271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.603 [2024-11-26 19:15:36.655277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.603 [2024-11-26 19:15:36.655282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.603 [2024-11-26 19:15:36.655288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:25:30.603 [2024-11-26 19:15:36.657970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:25:30.603 [2024-11-26 19:15:36.657995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc7da0 (9): Bad file descriptor 00:25:30.603 [2024-11-26 19:15:36.768048] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:25:30.603 11390.60 IOPS, 44.49 MiB/s [2024-11-26T18:15:47.816Z] 11636.00 IOPS, 45.45 MiB/s [2024-11-26T18:15:47.816Z] 11816.29 IOPS, 46.16 MiB/s [2024-11-26T18:15:47.816Z] 11960.88 IOPS, 46.72 MiB/s [2024-11-26T18:15:47.816Z] 12081.78 IOPS, 47.19 MiB/s [2024-11-26T18:15:47.816Z]
00:25:30.603 [2024-11-26 19:15:41.038066-41.038839] nvme_qpair.c: *NOTICE*: queued WRITE commands (sqid:1 nsid:1 lba:29568-29976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000) and interleaved READ commands (sqid:1 nsid:1 lba:29216-29272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each aborted with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [~60 command/completion pairs condensed]
[2024-11-26 19:15:41.038845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:29280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.605 [2024-11-26 19:15:41.038850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.605 [2024-11-26 19:15:41.038857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:29288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.605 [2024-11-26 19:15:41.038863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.605 [2024-11-26 19:15:41.038870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:29296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.605 [2024-11-26 19:15:41.038875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.605 [2024-11-26 19:15:41.038881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:29304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.605 [2024-11-26 19:15:41.038887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.605 [2024-11-26 19:15:41.038894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:29312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.605 [2024-11-26 19:15:41.038899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.605 [2024-11-26 19:15:41.038906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:29320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.605 [2024-11-26 19:15:41.038911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.605 [2024-11-26 19:15:41.038918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.605 [2024-11-26 19:15:41.038923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.605 [2024-11-26 19:15:41.038929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:29336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.605 [2024-11-26 19:15:41.038934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.605 [2024-11-26 19:15:41.038941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.605 [2024-11-26 19:15:41.038947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.605 [2024-11-26 19:15:41.038954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:29992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.605 [2024-11-26 19:15:41.038959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.605 [2024-11-26 19:15:41.038965] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:30000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.605 [2024-11-26 19:15:41.038971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.605 [2024-11-26 19:15:41.038977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.605 [2024-11-26 19:15:41.038982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.605 [2024-11-26 19:15:41.038989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:30016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.605 [2024-11-26 19:15:41.038994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.605 [2024-11-26 19:15:41.039001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:30024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.605 [2024-11-26 19:15:41.039006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.605 [2024-11-26 19:15:41.039012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.605 [2024-11-26 19:15:41.039019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.605 [2024-11-26 19:15:41.039025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:30040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.605 [2024-11-26 19:15:41.039031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.605 [2024-11-26 19:15:41.039037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.605 [2024-11-26 19:15:41.039042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.605 [2024-11-26 19:15:41.039049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:30056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.605 [2024-11-26 19:15:41.039054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.605 [2024-11-26 19:15:41.039061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:30064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.605 [2024-11-26 19:15:41.039066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.605 [2024-11-26 19:15:41.039072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:30072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.605 [2024-11-26 19:15:41.039077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.605 [2024-11-26 19:15:41.039084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:126 nsid:1 lba:30080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.605 [2024-11-26 19:15:41.039089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.605 [2024-11-26 19:15:41.039095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:30088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.605 [2024-11-26 19:15:41.039101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.605 [2024-11-26 19:15:41.039107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:30096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.605 [2024-11-26 19:15:41.039113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.605 [2024-11-26 19:15:41.039120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:30104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.606 [2024-11-26 19:15:41.039125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.606 [2024-11-26 19:15:41.039131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:30112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.606 [2024-11-26 19:15:41.039137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.606 [2024-11-26 19:15:41.039143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.606 [2024-11-26 19:15:41.039148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.606 [2024-11-26 19:15:41.039155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:30128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.606 [2024-11-26 19:15:41.039162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.606 [2024-11-26 19:15:41.039170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:30136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.606 [2024-11-26 19:15:41.039175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.606 [2024-11-26 19:15:41.039181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:30144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.606 [2024-11-26 19:15:41.039186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.606 [2024-11-26 19:15:41.039192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:30152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.606 [2024-11-26 19:15:41.039197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.606 [2024-11-26 19:15:41.039204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:30160 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:30.606 [2024-11-26 19:15:41.039209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.606 [2024-11-26 19:15:41.039216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:30168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.606 [2024-11-26 19:15:41.039221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.606 [2024-11-26 19:15:41.039228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:30176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.606 [2024-11-26 19:15:41.039233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.606 [2024-11-26 19:15:41.039239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:30184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.606 [2024-11-26 19:15:41.039244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.606 [2024-11-26 19:15:41.039250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:30192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.606 [2024-11-26 19:15:41.039255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.606 [2024-11-26 19:15:41.039261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:30200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.606 [2024-11-26 19:15:41.039267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.606 [2024-11-26 19:15:41.039274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:30208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.606 [2024-11-26 19:15:41.039279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.606 [2024-11-26 19:15:41.039285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:30216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.606 [2024-11-26 19:15:41.039290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.606 [2024-11-26 19:15:41.039298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:30224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.606 [2024-11-26 19:15:41.039303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.606 [2024-11-26 19:15:41.039310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:29344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.606 [2024-11-26 19:15:41.039316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.606 [2024-11-26 19:15:41.039333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.606 [2024-11-26 19:15:41.039339] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29352 len:8 PRP1 0x0 PRP2 0x0 00:25:30.606 [2024-11-26 19:15:41.039344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.606 [2024-11-26 19:15:41.039352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.606 [2024-11-26 19:15:41.039356] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.606 [2024-11-26 19:15:41.039361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29360 len:8 PRP1 0x0 PRP2 0x0 00:25:30.606 [2024-11-26 19:15:41.039366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.606 [2024-11-26 19:15:41.039371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.606 [2024-11-26 19:15:41.039376] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.606 [2024-11-26 19:15:41.039381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29368 len:8 PRP1 0x0 PRP2 0x0 00:25:30.606 [2024-11-26 19:15:41.039386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.606 [2024-11-26 19:15:41.039391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.606 [2024-11-26 19:15:41.039395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.606 [2024-11-26 19:15:41.039400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29376 len:8 PRP1 0x0 PRP2 0x0 00:25:30.606 [2024-11-26 19:15:41.039405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.606 [2024-11-26 19:15:41.039410] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.606 [2024-11-26 19:15:41.039414] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.606 [2024-11-26 19:15:41.039418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29384 len:8 PRP1 0x0 PRP2 0x0 00:25:30.606 [2024-11-26 19:15:41.039423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.606 [2024-11-26 19:15:41.039429] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.606 [2024-11-26 19:15:41.039434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.606 [2024-11-26 19:15:41.039438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29392 len:8 PRP1 0x0 PRP2 0x0 00:25:30.606 [2024-11-26 19:15:41.039443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.606 [2024-11-26 19:15:41.039449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.606 [2024-11-26 19:15:41.039452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.606 [2024-11-26 19:15:41.039457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:29400 len:8 PRP1 0x0 PRP2 0x0 00:25:30.606 [2024-11-26 19:15:41.039462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.606 [2024-11-26 19:15:41.039467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.606 [2024-11-26 19:15:41.039471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.606 [2024-11-26 19:15:41.039475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29408 len:8 PRP1 0x0 PRP2 0x0 00:25:30.606 [2024-11-26 19:15:41.039482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.606 [2024-11-26 19:15:41.039488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.606 [2024-11-26 19:15:41.039492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.606 [2024-11-26 19:15:41.039496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29416 len:8 PRP1 0x0 PRP2 0x0 00:25:30.606 [2024-11-26 19:15:41.039501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.606 [2024-11-26 19:15:41.039506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.606 [2024-11-26 19:15:41.039510] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.606 [2024-11-26 19:15:41.039514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29424 len:8 PRP1 0x0 PRP2 0x0 00:25:30.606 [2024-11-26 19:15:41.039519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.606 [2024-11-26 19:15:41.039524] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.606 [2024-11-26 19:15:41.039528] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.606 [2024-11-26 19:15:41.039533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29432 len:8 PRP1 0x0 PRP2 0x0 00:25:30.606 [2024-11-26 19:15:41.039538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.606 [2024-11-26 19:15:41.039544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.606 [2024-11-26 19:15:41.039547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.606 [2024-11-26 19:15:41.039552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29440 len:8 PRP1 0x0 PRP2 0x0 00:25:30.606 [2024-11-26 19:15:41.039557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.606 [2024-11-26 19:15:41.039562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.606 [2024-11-26 19:15:41.039566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.606 [2024-11-26 19:15:41.039570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29448 len:8 PRP1 0x0 PRP2 0x0 00:25:30.606 
[2024-11-26 19:15:41.039575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.606 [2024-11-26 19:15:41.039580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.606 [2024-11-26 19:15:41.039585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.607 [2024-11-26 19:15:41.039590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29456 len:8 PRP1 0x0 PRP2 0x0 00:25:30.607 [2024-11-26 19:15:41.039595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.607 [2024-11-26 19:15:41.039600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.607 [2024-11-26 19:15:41.039604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.607 [2024-11-26 19:15:41.039608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29464 len:8 PRP1 0x0 PRP2 0x0 00:25:30.607 [2024-11-26 19:15:41.039613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.607 [2024-11-26 19:15:41.039618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.607 [2024-11-26 19:15:41.039623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.607 [2024-11-26 19:15:41.039629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29472 len:8 PRP1 0x0 PRP2 0x0 00:25:30.607 [2024-11-26 19:15:41.039634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.607 [2024-11-26 19:15:41.039640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.607 [2024-11-26 19:15:41.039644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.607 [2024-11-26 19:15:41.039649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29480 len:8 PRP1 0x0 PRP2 0x0 00:25:30.607 [2024-11-26 19:15:41.039654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.607 [2024-11-26 19:15:41.039660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.607 [2024-11-26 19:15:41.039665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.607 [2024-11-26 19:15:41.039669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29488 len:8 PRP1 0x0 PRP2 0x0 00:25:30.607 [2024-11-26 19:15:41.039674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.607 [2024-11-26 19:15:41.039679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.607 [2024-11-26 19:15:41.039683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.607 [2024-11-26 19:15:41.039687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29496 len:8 PRP1 0x0 PRP2 0x0 00:25:30.607 [2024-11-26 19:15:41.039692] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.607 [2024-11-26 19:15:41.039697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.607 [2024-11-26 19:15:41.039702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.607 [2024-11-26 19:15:41.039706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29504 len:8 PRP1 0x0 PRP2 0x0 00:25:30.607 [2024-11-26 19:15:41.039711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.607 [2024-11-26 19:15:41.039716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.607 [2024-11-26 19:15:41.039720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.607 [2024-11-26 19:15:41.039724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29512 len:8 PRP1 0x0 PRP2 0x0 00:25:30.607 [2024-11-26 19:15:41.039729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.607 [2024-11-26 19:15:41.039735] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.607 [2024-11-26 19:15:41.039739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.607 [2024-11-26 19:15:41.039743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29520 len:8 PRP1 0x0 PRP2 0x0 00:25:30.607 [2024-11-26 19:15:41.039748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.607 [2024-11-26 19:15:41.039753] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.607 [2024-11-26 19:15:41.039758] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.607 [2024-11-26 19:15:41.039762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30232 len:8 PRP1 0x0 PRP2 0x0 00:25:30.607 [2024-11-26 19:15:41.039767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.607 [2024-11-26 19:15:41.039773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.607 [2024-11-26 19:15:41.039777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.607 [2024-11-26 19:15:41.039782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29528 len:8 PRP1 0x0 PRP2 0x0 00:25:30.607 [2024-11-26 19:15:41.039787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.607 [2024-11-26 19:15:41.039792] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.607 [2024-11-26 19:15:41.039796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.607 [2024-11-26 19:15:41.039800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29536 len:8 PRP1 0x0 PRP2 0x0 00:25:30.607 [2024-11-26 19:15:41.039805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.607 [2024-11-26 19:15:41.039811] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.607 [2024-11-26 19:15:41.039815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.607 [2024-11-26 19:15:41.039819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29544 len:8 PRP1 0x0 PRP2 0x0 00:25:30.607 [2024-11-26 19:15:41.039824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.607 [2024-11-26 19:15:41.039829] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.607 [2024-11-26 19:15:41.039833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.607 [2024-11-26 19:15:41.039837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29552 len:8 PRP1 0x0 PRP2 0x0 00:25:30.607 [2024-11-26 19:15:41.050360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.607 [2024-11-26 19:15:41.050393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:30.607 [2024-11-26 19:15:41.050401] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:30.607 [2024-11-26 19:15:41.050408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29560 len:8 PRP1 0x0 PRP2 0x0 00:25:30.607 [2024-11-26 19:15:41.050415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.607 [2024-11-26 19:15:41.050459] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:30.607 [2024-11-26 19:15:41.050489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.607 [2024-11-26 19:15:41.050501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.607 [2024-11-26 19:15:41.050515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.607 [2024-11-26 19:15:41.050526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.607 [2024-11-26 19:15:41.050535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.607 [2024-11-26 19:15:41.050542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.607 [2024-11-26 19:15:41.050550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.607 [2024-11-26 19:15:41.050557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.607 [2024-11-26 19:15:41.050568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
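The "Start failover from 10.0.0.2:4422 to 10.0.0.2:4420" notice above is bdev_nvme retrying I/O on an alternate path after the listener it was using went away. As a hedged illustration only, not the test script itself, the multipath setup that makes this possible can be assembled from the same rpc.py calls this run issues later (same bdev name, NQN, addresses, and -x failover flag as in the trace further down):

  # Register several TCP paths to one subsystem under a single bdev controller;
  # -x failover lets bdev_nvme move I/O to the next registered path on failure.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock
  for port in 4420 4421 4422; do
      "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
          -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  done
  # Detaching the active path (as failover.sh does) then triggers the
  # "Start failover from ... to ..." notice and the SQ DELETION aborts above.
  "$RPC" -s "$SOCK" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1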
00:25:30.607 [2024-11-26 19:15:41.050607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc7da0 (9): Bad file descriptor
00:25:30.607 [2024-11-26 19:15:41.054039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:25:30.607 [2024-11-26 19:15:41.076564] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:25:30.607 12097.10 IOPS, 47.25 MiB/s [2024-11-26T18:15:47.820Z]
00:25:30.607 12178.64 IOPS, 47.57 MiB/s [2024-11-26T18:15:47.820Z]
00:25:30.607 12241.83 IOPS, 47.82 MiB/s [2024-11-26T18:15:47.821Z]
00:25:30.607 12277.46 IOPS, 47.96 MiB/s [2024-11-26T18:15:47.821Z]
00:25:30.607 12317.93 IOPS, 48.12 MiB/s [2024-11-26T18:15:47.821Z]
00:25:30.607 12367.60 IOPS, 48.31 MiB/s
00:25:30.608 Latency(us)
00:25:30.608 [2024-11-26T18:15:47.821Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:30.608 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:30.608 Verification LBA range: start 0x0 length 0x4000
00:25:30.608 NVMe0n1 : 15.01 12366.37 48.31 678.41 0.00 9790.99 556.37 21736.11
00:25:30.608 [2024-11-26T18:15:47.821Z] ===================================================================================================================
00:25:30.608 [2024-11-26T18:15:47.821Z] Total : 12366.37 48.31 678.41 0.00 9790.99 556.37 21736.11
00:25:30.608 Received shutdown signal, test time was about 15.000000 seconds
00:25:30.608
00:25:30.608 Latency(us)
00:25:30.608 [2024-11-26T18:15:47.821Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:30.608 [2024-11-26T18:15:47.821Z] ===================================================================================================================
00:25:30.608 [2024-11-26T18:15:47.821Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:30.608 19:15:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:25:30.608 19:15:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:25:30.608 19:15:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:25:30.608 19:15:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3058168
00:25:30.608 19:15:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:25:30.608 19:15:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3058168 /var/tmp/bdevperf.sock
00:25:30.608 19:15:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3058168 ']'
00:25:30.608 19:15:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:30.608 19:15:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:30.608 19:15:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:30.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
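The @65/@67 trace above is the test's pass criterion: the path drops must have produced exactly three "Resetting controller successful" events. A minimal sketch of that check, assuming the count comes from the try.txt capture that the script cats at @94, together with a sanity check of the summary table (48.31 MiB/s is just 12366.37 IOPS at the 4096-byte I/O size bdevperf was started with):

  # Require one successful reset per dropped path (3 paths -> 3 resets).
  log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
  count=$(grep -c 'Resetting controller successful' "$log")
  (( count == 3 )) || { echo "expected 3 successful resets, got $count" >&2; exit 1; }

  # MiB/s = IOPS * io_size / 2^20; reproduces the 48.31 MiB/s in the table.
  awk 'BEGIN { printf "%.2f MiB/s\n", 12366.37 * 4096 / 1048576 }'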
00:25:30.608 19:15:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:30.608 19:15:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:30.867 19:15:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:30.867 19:15:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:30.867 19:15:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:31.127 [2024-11-26 19:15:48.213503] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:31.127 19:15:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:31.388 [2024-11-26 19:15:48.393942] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:31.388 19:15:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:31.648 NVMe0n1 00:25:31.648 19:15:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:31.908 00:25:31.908 19:15:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:32.480 00:25:32.480 19:15:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:32.480 19:15:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:32.480 19:15:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:32.740 19:15:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:36.041 19:15:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:36.041 19:15:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:36.041 19:15:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:36.041 19:15:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3059505 00:25:36.041 19:15:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3059505 00:25:36.981 { 00:25:36.981 "results": [ 00:25:36.981 { 00:25:36.981 "job": "NVMe0n1", 00:25:36.981 "core_mask": "0x1", 
00:25:36.981 "workload": "verify", 00:25:36.981 "status": "finished", 00:25:36.981 "verify_range": { 00:25:36.981 "start": 0, 00:25:36.981 "length": 16384 00:25:36.981 }, 00:25:36.981 "queue_depth": 128, 00:25:36.981 "io_size": 4096, 00:25:36.981 "runtime": 1.007136, 00:25:36.981 "iops": 12850.300257363455, 00:25:36.981 "mibps": 50.196485380325996, 00:25:36.981 "io_failed": 0, 00:25:36.981 "io_timeout": 0, 00:25:36.981 "avg_latency_us": 9927.400698501006, 00:25:36.981 "min_latency_us": 2116.266666666667, 00:25:36.981 "max_latency_us": 8465.066666666668 00:25:36.981 } 00:25:36.981 ], 00:25:36.981 "core_count": 1 00:25:36.981 } 00:25:36.981 19:15:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:36.981 [2024-11-26 19:15:47.257983] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:25:36.981 [2024-11-26 19:15:47.258036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3058168 ] 00:25:36.981 [2024-11-26 19:15:47.342067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.981 [2024-11-26 19:15:47.371403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.981 [2024-11-26 19:15:49.791720] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:36.981 [2024-11-26 19:15:49.791761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:36.981 [2024-11-26 19:15:49.791769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:36.981 [2024-11-26 19:15:49.791776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:36.981 [2024-11-26 19:15:49.791782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:36.981 [2024-11-26 19:15:49.791788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:36.981 [2024-11-26 19:15:49.791793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:36.981 [2024-11-26 19:15:49.791799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:36.981 [2024-11-26 19:15:49.791804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:36.981 [2024-11-26 19:15:49.791810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:25:36.981 [2024-11-26 19:15:49.791832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:25:36.981 [2024-11-26 19:15:49.791843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c36da0 (9): Bad file descriptor 00:25:36.981 [2024-11-26 19:15:49.802436] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:25:36.981 Running I/O for 1 seconds... 00:25:36.981 12814.00 IOPS, 50.05 MiB/s 00:25:36.981 Latency(us) 00:25:36.981 [2024-11-26T18:15:54.194Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:36.981 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:36.981 Verification LBA range: start 0x0 length 0x4000 00:25:36.981 NVMe0n1 : 1.01 12850.30 50.20 0.00 0.00 9927.40 2116.27 8465.07 00:25:36.981 [2024-11-26T18:15:54.194Z] =================================================================================================================== 00:25:36.981 [2024-11-26T18:15:54.194Z] Total : 12850.30 50.20 0.00 0.00 9927.40 2116.27 8465.07 00:25:36.981 19:15:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:36.981 19:15:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:37.243 19:15:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:37.503 19:15:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:37.503 19:15:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:37.503 19:15:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:37.763 19:15:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:41.062 19:15:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:41.062 19:15:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:41.062 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3058168 00:25:41.062 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3058168 ']' 00:25:41.062 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3058168 00:25:41.062 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:41.062 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:41.062 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3058168 00:25:41.062 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:41.062 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:41.062 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3058168' 00:25:41.062 killing process with pid 3058168 00:25:41.062 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3058168 00:25:41.062 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3058168 00:25:41.062 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:41.062 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:41.323 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:41.323 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:41.323 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:41.323 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:41.323 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:25:41.323 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:41.323 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:25:41.323 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:41.323 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:41.323 rmmod nvme_tcp 00:25:41.323 rmmod nvme_fabrics 00:25:41.323 rmmod nvme_keyring 00:25:41.323 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:41.323 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:25:41.323 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:25:41.323 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3054244 ']' 00:25:41.323 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3054244 00:25:41.323 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3054244 ']' 00:25:41.323 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3054244 00:25:41.323 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:41.323 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:41.323 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3054244 00:25:41.583 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:41.583 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:41.583 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3054244' 00:25:41.583 killing process with pid 3054244 00:25:41.583 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3054244 00:25:41.583 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3054244 00:25:41.584 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:25:41.584 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:41.584 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:41.584 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:25:41.584 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:25:41.584 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:41.584 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:25:41.584 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:41.584 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:41.584 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:41.584 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:41.584 19:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.126 19:16:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:44.126 00:25:44.126 real 0m40.420s 00:25:44.126 user 2m4.318s 00:25:44.126 sys 0m8.726s 00:25:44.126 19:16:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:44.126 19:16:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:44.126 ************************************ 00:25:44.126 END TEST nvmf_failover 00:25:44.126 ************************************ 00:25:44.126 19:16:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:44.126 19:16:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:44.126 19:16:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:44.126 19:16:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.126 ************************************ 00:25:44.126 START TEST nvmf_host_discovery 00:25:44.126 ************************************ 00:25:44.126 19:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:44.126 * Looking for test storage... 
00:25:44.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:44.126 19:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:44.126 19:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:25:44.126 19:16:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:44.126 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:44.126 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:44.126 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:44.126 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:44.126 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:25:44.126 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:25:44.126 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:25:44.126 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:25:44.126 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:25:44.126 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:25:44.126 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:25:44.126 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:44.126 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:25:44.126 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:25:44.126 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:44.126 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:44.126 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:25:44.126 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:25:44.126 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:44.126 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:25:44.126 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:25:44.126 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:25:44.126 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:25:44.126 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:44.126 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:44.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.127 --rc genhtml_branch_coverage=1 00:25:44.127 --rc genhtml_function_coverage=1 00:25:44.127 --rc genhtml_legend=1 00:25:44.127 --rc geninfo_all_blocks=1 00:25:44.127 --rc geninfo_unexecuted_blocks=1 00:25:44.127 00:25:44.127 ' 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:44.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.127 --rc genhtml_branch_coverage=1 00:25:44.127 --rc genhtml_function_coverage=1 00:25:44.127 --rc genhtml_legend=1 00:25:44.127 --rc geninfo_all_blocks=1 00:25:44.127 --rc geninfo_unexecuted_blocks=1 00:25:44.127 00:25:44.127 ' 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:44.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.127 --rc genhtml_branch_coverage=1 00:25:44.127 --rc genhtml_function_coverage=1 00:25:44.127 --rc genhtml_legend=1 00:25:44.127 --rc geninfo_all_blocks=1 00:25:44.127 --rc geninfo_unexecuted_blocks=1 00:25:44.127 00:25:44.127 ' 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:44.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.127 --rc genhtml_branch_coverage=1 00:25:44.127 --rc genhtml_function_coverage=1 00:25:44.127 --rc genhtml_legend=1 00:25:44.127 --rc geninfo_all_blocks=1 00:25:44.127 --rc geninfo_unexecuted_blocks=1 00:25:44.127 00:25:44.127 ' 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:44.127 19:16:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:44.127 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:25:44.127 19:16:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:52.271 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:52.271 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:52.271 19:16:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:52.271 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:52.271 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:52.271 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:52.272 
19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:52.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:52.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:25:52.272 00:25:52.272 --- 10.0.0.2 ping statistics --- 00:25:52.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:52.272 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:52.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:52.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:25:52.272 00:25:52.272 --- 10.0.0.1 ping statistics --- 00:25:52.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:52.272 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3065135 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3065135 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3065135 ']' 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:52.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:52.272 19:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.272 [2024-11-26 19:16:08.707328] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
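For reference, the namespace plumbing that nvmf_tcp_init performed in the xtrace above, collected into one place; cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are this rig's values and will differ elsewhere:

  # move the target-side port into its own namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target interface
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # punch a hole for the NVMe/TCP port, tagged SPDK_NVMF so iptr can strip it later
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                    # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator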
00:25:52.272 [2024-11-26 19:16:08.707394] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:52.272 [2024-11-26 19:16:08.807527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.272 [2024-11-26 19:16:08.857851] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:52.272 [2024-11-26 19:16:08.857900] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:52.272 [2024-11-26 19:16:08.857909] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:52.272 [2024-11-26 19:16:08.857916] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:52.272 [2024-11-26 19:16:08.857923] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:52.272 [2024-11-26 19:16:08.858723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:52.533 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:52.533 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:52.533 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:52.533 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:52.533 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.533 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:52.533 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:52.533 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.533 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.533 [2024-11-26 19:16:09.571517] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:52.533 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.533 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:52.533 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.533 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.533 [2024-11-26 19:16:09.583772] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:52.533 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.533 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:52.533 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.533 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.533 null0 00:25:52.533 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.533 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:52.533 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.533 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.533 null1 00:25:52.533 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.533 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:52.533 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.533 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.533 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.533 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3065202 00:25:52.533 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3065202 /tmp/host.sock 00:25:52.533 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:52.533 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3065202 ']' 00:25:52.533 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:52.533 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:52.533 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:52.533 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:52.533 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:52.533 19:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.533 [2024-11-26 19:16:09.682177] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
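For reference, this test drives two SPDK instances, both launched in the trace above. A sketch of the pair with paths as traced; rpc_cmd with no -s argument goes to the target's default /var/tmp/spdk.sock:

  # target: runs inside the namespace; -m 0x2 keeps its reactor off core 0
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  scripts/rpc.py bdev_null_create null0 1000 512        # name, size, block size as traced
  scripts/rpc.py bdev_null_create null1 1000 512
  scripts/rpc.py bdev_wait_for_examine
  # host: separate instance on core 0 with its own RPC socket so the two do not collide
  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &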
00:25:52.533 [2024-11-26 19:16:09.682244] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3065202 ] 00:25:52.794 [2024-11-26 19:16:09.774649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.794 [2024-11-26 19:16:09.827662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.366 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:53.366 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:53.366 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:53.366 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:53.366 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.366 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.366 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.366 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:53.366 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.366 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.366 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.366 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:53.366 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:53.366 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:53.366 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:53.366 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.366 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.366 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:53.366 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:53.366 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.627 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:53.627 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:53.627 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:53.627 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:53.627 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.627 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:25:53.627 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.627 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:53.627 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.627 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:53.627 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:53.627 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.627 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.627 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.627 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:53.627 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:53.627 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.627 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:53.627 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.627 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:53.627 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:53.628 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.628 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:53.628 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:53.628 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:53.628 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.628 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:53.628 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:53.628 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.628 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:53.628 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.628 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:53.628 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:53.628 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.628 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.628 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.628 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:53.628 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:53.628 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:53.628 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.628 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:53.628 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.628 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:53.628 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.628 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:53.628 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:53.628 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:53.628 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.628 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:53.628 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:53.628 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.628 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:53.628 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.889 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:53.889 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:53.889 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.889 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.889 [2024-11-26 19:16:10.875173] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:53.889 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.889 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:53.889 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:53.889 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:53.889 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.889 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.889 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:53.889 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:53.889 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.889 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:53.889 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:53.889 19:16:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:53.889 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:53.889 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.889 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:53.889 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.889 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:53.889 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.889 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:53.889 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:53.889 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:53.889 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:53.889 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:53.889 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:53.889 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:53.889 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:53.889 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:53.889 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:53.889 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:53.889 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.889 19:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.889 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.889 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:53.889 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:53.889 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:53.890 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:53.890 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:53.890 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.890 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.890 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.890 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:53.890 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:53.890 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:53.890 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:53.890 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:53.890 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:53.890 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:53.890 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:53.890 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.890 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:53.890 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.890 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:53.890 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.890 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:25:53.890 19:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:54.461 [2024-11-26 19:16:11.582416] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:54.461 [2024-11-26 19:16:11.582450] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:54.462 [2024-11-26 19:16:11.582466] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:54.462 
[2024-11-26 19:16:11.670789] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:54.722 [2024-11-26 19:16:11.894622] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:54.723 [2024-11-26 19:16:11.896024] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xe3d7f0:1 started. 00:25:54.723 [2024-11-26 19:16:11.898066] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:54.723 [2024-11-26 19:16:11.898098] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:54.723 [2024-11-26 19:16:11.900547] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xe3d7f0 was disconnected and freed. delete nvme_qpair. 00:25:54.982 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:54.982 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:54.982 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:54.982 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:54.982 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:54.982 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.982 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:54.982 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.982 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:54.982 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.982 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.982 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:54.982 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:54.982 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:54.982 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:54.982 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:54.982 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:54.982 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:54.982 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:54.983 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:54.983 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.983 19:16:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:54.983 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.983 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:54.983 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:55.244 [2024-11-26 19:16:12.328912] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xe3d9d0:1 started. 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.244 [2024-11-26 19:16:12.332434] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xe3d9d0 was disconnected and freed. delete nvme_qpair. 
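
The wait loops stepping through autotest_common.sh@918-@924 above all come from a single polling helper: waitforcondition evaluates an arbitrary bash condition up to ten times, sleeping a second between attempts. A minimal sketch reconstructed from those xtrace lines (the real helper lives in test/common/autotest_common.sh; its error handling may differ):

    waitforcondition() {
        local cond=$1      # @918: the condition, e.g. '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
        local max=10       # @919: retry budget
        while ((max--)); do                # @920
            if eval $cond; then            # @921
                return 0                   # @922: condition held
            fi
            sleep 1                        # @924: back off and retry
        done
        return 1
    }
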
00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:55.244 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:55.245 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:55.245 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:55.245 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:55.245 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.245 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.245 [2024-11-26 19:16:12.431184] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:55.245 [2024-11-26 19:16:12.431306] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:55.245 [2024-11-26 19:16:12.431329] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:55.245 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.245 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:25:55.245 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:55.245 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:55.245 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:55.245 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:55.245 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:55.245 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:55.245 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:55.245 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.245 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:55.245 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.245 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:55.506 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.506 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.506 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:55.506 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:55.506 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:55.506 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:55.506 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:55.506 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:55.506 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:55.506 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:55.506 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:55.506 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.506 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:55.506 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.506 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:55.506 [2024-11-26 19:16:12.518583] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:55.506 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.506 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 
nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:55.506 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:55.506 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:55.506 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:55.506 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:55.506 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:55.506 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:55.506 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:55.506 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:55.506 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:55.506 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.506 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.506 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:55.506 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:55.506 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.506 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:55.506 19:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:55.506 [2024-11-26 19:16:12.617431] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:25:55.506 [2024-11-26 19:16:12.617469] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:55.506 [2024-11-26 19:16:12.617478] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:55.506 [2024-11-26 19:16:12.617483] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:56.447 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:56.447 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:56.447 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:56.447 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:56.447 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:56.447 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:56.447 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:56.447 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.447 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:56.447 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.447 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:56.447 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:56.447 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:56.447 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:56.447 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:56.447 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:56.447 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:56.447 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:56.447 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:56.447 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:56.447 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:56.447 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:56.447 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.447 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.709 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.709 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:56.709 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:56.709 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:56.709 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:56.709 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:56.709 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.709 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.709 [2024-11-26 19:16:13.706627] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:56.709 [2024-11-26 19:16:13.706649] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:56.709 [2024-11-26 19:16:13.709080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.709 [2024-11-26 19:16:13.709099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.709 [2024-11-26 19:16:13.709109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.709 [2024-11-26 19:16:13.709117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.709 [2024-11-26 19:16:13.709125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.709 [2024-11-26 19:16:13.709133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.709 [2024-11-26 19:16:13.709141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.709 [2024-11-26 19:16:13.709149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.709 [2024-11-26 19:16:13.709156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0de10 is same with the state(6) to be set 00:25:56.709 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.709 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:56.709 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:56.709 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:25:56.709 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:56.709 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:56.709 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:56.709 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:56.709 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.709 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.709 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:56.709 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:56.709 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:56.709 [2024-11-26 19:16:13.719092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0de10 (9): Bad file descriptor 00:25:56.709 [2024-11-26 19:16:13.729128] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:56.709 [2024-11-26 19:16:13.729142] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:56.709 [2024-11-26 19:16:13.729147] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:56.709 [2024-11-26 19:16:13.729152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:56.709 [2024-11-26 19:16:13.729175] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:56.709 [2024-11-26 19:16:13.729519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.709 [2024-11-26 19:16:13.729535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0de10 with addr=10.0.0.2, port=4420 00:25:56.709 [2024-11-26 19:16:13.729544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0de10 is same with the state(6) to be set 00:25:56.709 [2024-11-26 19:16:13.729556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0de10 (9): Bad file descriptor 00:25:56.709 [2024-11-26 19:16:13.729567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:56.709 [2024-11-26 19:16:13.729574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:56.709 [2024-11-26 19:16:13.729582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:56.709 [2024-11-26 19:16:13.729589] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:56.709 [2024-11-26 19:16:13.729595] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:56.709 [2024-11-26 19:16:13.729600] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
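
The conditions being polled are built from three small RPC wrappers in host/discovery.sh, visible piecewise in the @55/@59/@63 xtrace lines. Reconstructed from those lines (the socket path matches the /tmp/host.sock used throughout this run):

    get_subsystem_names() {    # @59: controller names known to the host app
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {          # @55: bdevs the host app currently exposes
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    get_subsystem_paths() {    # @63: trsvcid of every connected path for controller $1
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n $1 | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

The trailing xargs collapses the sorted output onto one space-separated line, which is why the assertions above compare against strings like "nvme0n1 nvme0n2" and "4420 4421".
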
00:25:56.709 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.710 [2024-11-26 19:16:13.739205] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:56.710 [2024-11-26 19:16:13.739217] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:56.710 [2024-11-26 19:16:13.739226] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:56.710 [2024-11-26 19:16:13.739230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:56.710 [2024-11-26 19:16:13.739245] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:56.710 [2024-11-26 19:16:13.739577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.710 [2024-11-26 19:16:13.739590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0de10 with addr=10.0.0.2, port=4420 00:25:56.710 [2024-11-26 19:16:13.739597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0de10 is same with the state(6) to be set 00:25:56.710 [2024-11-26 19:16:13.739608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0de10 (9): Bad file descriptor 00:25:56.710 [2024-11-26 19:16:13.739619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:56.710 [2024-11-26 19:16:13.739626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:56.710 [2024-11-26 19:16:13.739633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:56.710 [2024-11-26 19:16:13.739639] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:56.710 [2024-11-26 19:16:13.739643] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:56.710 [2024-11-26 19:16:13.739648] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:56.710 [2024-11-26 19:16:13.749276] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:56.710 [2024-11-26 19:16:13.749289] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:56.710 [2024-11-26 19:16:13.749294] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:56.710 [2024-11-26 19:16:13.749299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:56.710 [2024-11-26 19:16:13.749314] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:56.710 [2024-11-26 19:16:13.749654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.710 [2024-11-26 19:16:13.749668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0de10 with addr=10.0.0.2, port=4420 00:25:56.710 [2024-11-26 19:16:13.749676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0de10 is same with the state(6) to be set 00:25:56.710 [2024-11-26 19:16:13.749687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0de10 (9): Bad file descriptor 00:25:56.710 [2024-11-26 19:16:13.749697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:56.710 [2024-11-26 19:16:13.749705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:56.710 [2024-11-26 19:16:13.749712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:56.710 [2024-11-26 19:16:13.749718] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:56.710 [2024-11-26 19:16:13.749723] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:56.710 [2024-11-26 19:16:13.749727] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:56.710 [2024-11-26 19:16:13.759345] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:56.710 [2024-11-26 19:16:13.759360] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:56.710 [2024-11-26 19:16:13.759365] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:56.710 [2024-11-26 19:16:13.759370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:56.710 [2024-11-26 19:16:13.759384] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:56.710 [2024-11-26 19:16:13.759684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.710 [2024-11-26 19:16:13.759697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0de10 with addr=10.0.0.2, port=4420 00:25:56.710 [2024-11-26 19:16:13.759704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0de10 is same with the state(6) to be set 00:25:56.710 [2024-11-26 19:16:13.759717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0de10 (9): Bad file descriptor 00:25:56.710 [2024-11-26 19:16:13.759727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:56.710 [2024-11-26 19:16:13.759734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:56.710 [2024-11-26 19:16:13.759742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:56.710 [2024-11-26 19:16:13.759748] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:56.710 [2024-11-26 19:16:13.759752] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:56.710 [2024-11-26 19:16:13.759757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:56.710 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.710 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:56.710 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:56.710 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:56.710 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:56.710 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:56.710 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:56.710 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:56.710 [2024-11-26 19:16:13.769417] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:56.710 [2024-11-26 19:16:13.769428] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:56.710 [2024-11-26 19:16:13.769433] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:56.710 [2024-11-26 19:16:13.769437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:56.710 [2024-11-26 19:16:13.769452] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:56.710 [2024-11-26 19:16:13.769771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.710 [2024-11-26 19:16:13.769783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0de10 with addr=10.0.0.2, port=4420 00:25:56.710 [2024-11-26 19:16:13.769791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0de10 is same with the state(6) to be set 00:25:56.710 [2024-11-26 19:16:13.769802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0de10 (9): Bad file descriptor 00:25:56.710 [2024-11-26 19:16:13.769815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:56.710 [2024-11-26 19:16:13.769822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:56.710 [2024-11-26 19:16:13.769829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:56.710 [2024-11-26 19:16:13.769835] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:56.710 [2024-11-26 19:16:13.769839] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:25:56.710 [2024-11-26 19:16:13.769844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:56.710 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:56.710 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:56.710 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:56.710 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:56.710 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.710 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.710 [2024-11-26 19:16:13.779483] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:56.710 [2024-11-26 19:16:13.779497] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:56.710 [2024-11-26 19:16:13.779502] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:56.711 [2024-11-26 19:16:13.779507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:56.711 [2024-11-26 19:16:13.779522] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:56.711 [2024-11-26 19:16:13.779819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.711 [2024-11-26 19:16:13.779832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0de10 with addr=10.0.0.2, port=4420 00:25:56.711 [2024-11-26 19:16:13.779840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0de10 is same with the state(6) to be set 00:25:56.711 [2024-11-26 19:16:13.779851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0de10 (9): Bad file descriptor 00:25:56.711 [2024-11-26 19:16:13.779869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:56.711 [2024-11-26 19:16:13.779876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:56.711 [2024-11-26 19:16:13.779883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:56.711 [2024-11-26 19:16:13.779890] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:56.711 [2024-11-26 19:16:13.779894] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:56.711 [2024-11-26 19:16:13.779899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:56.711 [2024-11-26 19:16:13.789555] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:56.711 [2024-11-26 19:16:13.789566] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:25:56.711 [2024-11-26 19:16:13.789571] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:56.711 [2024-11-26 19:16:13.789576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:56.711 [2024-11-26 19:16:13.789594] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:56.711 [2024-11-26 19:16:13.789934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.711 [2024-11-26 19:16:13.789946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0de10 with addr=10.0.0.2, port=4420 00:25:56.711 [2024-11-26 19:16:13.789954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0de10 is same with the state(6) to be set 00:25:56.711 [2024-11-26 19:16:13.789965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0de10 (9): Bad file descriptor 00:25:56.711 [2024-11-26 19:16:13.789982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:56.711 [2024-11-26 19:16:13.789989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:56.711 [2024-11-26 19:16:13.789997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:56.711 [2024-11-26 19:16:13.790003] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:56.711 [2024-11-26 19:16:13.790008] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:56.711 [2024-11-26 19:16:13.790012] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:56.711 [2024-11-26 19:16:13.799626] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:56.711 [2024-11-26 19:16:13.799639] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:56.711 [2024-11-26 19:16:13.799644] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:56.711 [2024-11-26 19:16:13.799648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:56.711 [2024-11-26 19:16:13.799663] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:56.711 [2024-11-26 19:16:13.799999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.711 [2024-11-26 19:16:13.800012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0de10 with addr=10.0.0.2, port=4420 00:25:56.711 [2024-11-26 19:16:13.800020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0de10 is same with the state(6) to be set 00:25:56.711 [2024-11-26 19:16:13.800033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0de10 (9): Bad file descriptor 00:25:56.711 [2024-11-26 19:16:13.800050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:56.711 [2024-11-26 19:16:13.800059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:56.711 [2024-11-26 19:16:13.800066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:56.711 [2024-11-26 19:16:13.800072] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:56.711 [2024-11-26 19:16:13.800077] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:56.711 [2024-11-26 19:16:13.800083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:56.711 [2024-11-26 19:16:13.809694] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:56.711 [2024-11-26 19:16:13.809705] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:56.711 [2024-11-26 19:16:13.809709] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:56.711 [2024-11-26 19:16:13.809717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:56.711 [2024-11-26 19:16:13.809731] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:56.711 [2024-11-26 19:16:13.810037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.711 [2024-11-26 19:16:13.810050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0de10 with addr=10.0.0.2, port=4420 00:25:56.711 [2024-11-26 19:16:13.810057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0de10 is same with the state(6) to be set 00:25:56.711 [2024-11-26 19:16:13.810068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0de10 (9): Bad file descriptor 00:25:56.711 [2024-11-26 19:16:13.810085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:56.711 [2024-11-26 19:16:13.810092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:56.711 [2024-11-26 19:16:13.810099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:56.711 [2024-11-26 19:16:13.810106] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:56.711 [2024-11-26 19:16:13.810110] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:56.711 [2024-11-26 19:16:13.810115] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:56.711 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.711 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:56.711 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:56.711 [2024-11-26 19:16:13.819764] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:56.711 [2024-11-26 19:16:13.819776] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:56.711 [2024-11-26 19:16:13.819781] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:56.711 [2024-11-26 19:16:13.819786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:56.711 [2024-11-26 19:16:13.819800] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:56.711 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:56.711 [2024-11-26 19:16:13.820131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.711 [2024-11-26 19:16:13.820145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0de10 with addr=10.0.0.2, port=4420 00:25:56.711 [2024-11-26 19:16:13.820153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0de10 is same with the state(6) to be set 00:25:56.712 [2024-11-26 19:16:13.820168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0de10 (9): Bad file descriptor 00:25:56.712 [2024-11-26 19:16:13.820187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:56.712 [2024-11-26 19:16:13.820195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:56.712 [2024-11-26 19:16:13.820203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:56.712 [2024-11-26 19:16:13.820209] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:56.712 [2024-11-26 19:16:13.820219] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:56.712 [2024-11-26 19:16:13.820224] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
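
The identical "Delete qpairs for reset ... connect() failed, errno = 111 ... Resetting controller failed" blocks repeating every ~10 ms above are the bdev_nvme reconnect poller retrying 10.0.0.2:4420 after its listener was removed; errno 111 is ECONNREFUSED, so each attempt fails immediately and is rescheduled until the discovery service withdraws the 4420 path. Outside a test like this, the retry cadence and give-up point are tunable before any controller attaches; a hedged sketch using option names assumed from current scripts/rpc.py (verify the flags against your SPDK version):

    # Assumed flag names -- check `scripts/rpc.py bdev_nvme_set_options -h`.
    # Must be issued before any bdev_nvme controller is created.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options \
        --reconnect-delay-sec 2 \
        --ctrlr-loss-timeout-sec 30 \
        --fast-io-fail-timeout-sec 10
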
00:25:56.712 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:56.712 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:56.712 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:56.712 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:56.712 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:56.712 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:56.712 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:56.712 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:56.712 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.712 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:56.712 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.712 [2024-11-26 19:16:13.829831] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:56.712 [2024-11-26 19:16:13.829844] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:56.712 [2024-11-26 19:16:13.829848] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:56.712 [2024-11-26 19:16:13.829853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:56.712 [2024-11-26 19:16:13.829867] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:56.712 [2024-11-26 19:16:13.830170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.712 [2024-11-26 19:16:13.830184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0de10 with addr=10.0.0.2, port=4420 00:25:56.712 [2024-11-26 19:16:13.830192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0de10 is same with the state(6) to be set 00:25:56.712 [2024-11-26 19:16:13.830203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0de10 (9): Bad file descriptor 00:25:56.712 [2024-11-26 19:16:13.830221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:56.712 [2024-11-26 19:16:13.830229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:56.712 [2024-11-26 19:16:13.830236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:56.712 [2024-11-26 19:16:13.830242] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:56.712 [2024-11-26 19:16:13.830247] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:25:56.712 [2024-11-26 19:16:13.830251] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:56.712 [2024-11-26 19:16:13.835238] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:56.712 [2024-11-26 19:16:13.835257] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:56.712 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.712 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:25:56.712 19:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 
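
The notification checks (@74/@75, driven by is_notification_count_eq at @79/@80) count bdev add/remove events through the host app's notify_get_notifications RPC, keeping a running cursor so each assertion only sees events newer than the last one. A sketch reconstructed from the trace, where notify_id advances 0 -> 1 -> 2 -> 4 as counts of 1, 1, 0, 0 and finally 2 are observed:

    notify_id=0    # cursor: first notification index not yet consumed

    get_notification_count() {    # @74/@75
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i $notify_id | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

    is_notification_count_eq() {  # @79/@80
        local expected_count=$1
        waitforcondition 'get_notification_count && ((notification_count == expected_count))'
    }
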
00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:58.095 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:58.096 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:58.096 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:58.096 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.096 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:58.096 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.096 19:16:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:58.096 19:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.096 19:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:58.096 19:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:58.096 19:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:58.096 19:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:58.096 19:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:25:58.096 19:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:58.096 19:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:58.096 19:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:58.096 19:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:58.096 19:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:58.096 19:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.096 19:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:58.096 19:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.096 19:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:58.096 19:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.096 19:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:58.096 19:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:58.096 19:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:58.096 19:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:58.096 19:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:58.096 19:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:58.096 19:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:58.096 19:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:58.096 19:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:58.096 19:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:58.096 19:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:58.096 19:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:58.096 19:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.096 19:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.096 19:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.096 19:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:58.096 19:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:58.096 19:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:58.096 19:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:58.096 19:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:58.096 19:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.096 19:16:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.035 [2024-11-26 19:16:16.204076] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:59.035 [2024-11-26 19:16:16.204090] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:59.035 [2024-11-26 19:16:16.204099] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:59.295 [2024-11-26 19:16:16.334481] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:59.557 [2024-11-26 19:16:16.599744] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:25:59.557 [2024-11-26 19:16:16.600382] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xe434a0:1 started. 
00:25:59.557 [2024-11-26 19:16:16.601789] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:59.557 [2024-11-26 19:16:16.601811] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:59.557 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.557 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:59.557 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:59.557 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:59.557 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:59.557 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:59.557 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:59.557 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:59.557 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:59.557 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.557 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.557 [2024-11-26 19:16:16.611764] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xe434a0 was disconnected and freed. delete nvme_qpair. 
00:25:59.557 request: 00:25:59.557 { 00:25:59.557 "name": "nvme", 00:25:59.557 "trtype": "tcp", 00:25:59.557 "traddr": "10.0.0.2", 00:25:59.557 "adrfam": "ipv4", 00:25:59.557 "trsvcid": "8009", 00:25:59.557 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:59.557 "wait_for_attach": true, 00:25:59.557 "method": "bdev_nvme_start_discovery", 00:25:59.557 "req_id": 1 00:25:59.557 } 00:25:59.557 Got JSON-RPC error response 00:25:59.557 response: 00:25:59.557 { 00:25:59.557 "code": -17, 00:25:59.557 "message": "File exists" 00:25:59.557 } 00:25:59.557 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:59.557 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:59.557 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:59.557 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:59.557 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:59.557 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:59.557 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:59.557 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:59.557 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.557 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:59.557 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.557 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:59.557 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.557 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:59.557 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:59.558 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:59.558 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:59.558 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.558 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:59.558 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.558 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:59.558 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.558 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:59.558 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:59.558 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:59.558 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg 
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:59.558 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:59.558 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:59.558 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:59.558 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:59.558 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:59.558 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.558 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.558 request: 00:25:59.558 { 00:25:59.558 "name": "nvme_second", 00:25:59.558 "trtype": "tcp", 00:25:59.558 "traddr": "10.0.0.2", 00:25:59.558 "adrfam": "ipv4", 00:25:59.558 "trsvcid": "8009", 00:25:59.558 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:59.558 "wait_for_attach": true, 00:25:59.558 "method": "bdev_nvme_start_discovery", 00:25:59.558 "req_id": 1 00:25:59.558 } 00:25:59.558 Got JSON-RPC error response 00:25:59.558 response: 00:25:59.558 { 00:25:59.558 "code": -17, 00:25:59.558 "message": "File exists" 00:25:59.558 } 00:25:59.558 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:59.558 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:59.558 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:59.558 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:59.558 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:59.558 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:59.558 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:59.558 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:59.558 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.558 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:59.558 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.558 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:59.558 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.818 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:59.818 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:59.818 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:59.818 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:59.818 19:16:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.818 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:59.818 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.818 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:59.818 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.818 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:59.818 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:59.818 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:59.818 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:59.818 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:59.818 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:59.818 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:59.818 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:59.818 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:59.818 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.818 19:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.758 [2024-11-26 19:16:17.861364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.758 [2024-11-26 19:16:17.861400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe3e370 with addr=10.0.0.2, port=8010 00:26:00.758 [2024-11-26 19:16:17.861416] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:00.758 [2024-11-26 19:16:17.861422] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:00.758 [2024-11-26 19:16:17.861427] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:01.700 [2024-11-26 19:16:18.863569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.700 [2024-11-26 19:16:18.863588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe3e370 with addr=10.0.0.2, port=8010 00:26:01.700 [2024-11-26 19:16:18.863597] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:01.700 [2024-11-26 19:16:18.863602] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:01.700 [2024-11-26 19:16:18.863607] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:03.084 [2024-11-26 19:16:19.865574] 
bdev_nvme.c:7527:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:03.084 request: 00:26:03.084 { 00:26:03.084 "name": "nvme_second", 00:26:03.084 "trtype": "tcp", 00:26:03.084 "traddr": "10.0.0.2", 00:26:03.084 "adrfam": "ipv4", 00:26:03.084 "trsvcid": "8010", 00:26:03.084 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:03.084 "wait_for_attach": false, 00:26:03.084 "attach_timeout_ms": 3000, 00:26:03.084 "method": "bdev_nvme_start_discovery", 00:26:03.084 "req_id": 1 00:26:03.084 } 00:26:03.084 Got JSON-RPC error response 00:26:03.084 response: 00:26:03.084 { 00:26:03.084 "code": -110, 00:26:03.084 "message": "Connection timed out" 00:26:03.084 } 00:26:03.084 19:16:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:03.084 19:16:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:03.084 19:16:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:03.084 19:16:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:03.084 19:16:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:03.084 19:16:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:03.084 19:16:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:03.084 19:16:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:03.084 19:16:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.084 19:16:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:03.084 19:16:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:03.084 19:16:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:03.084 19:16:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.084 19:16:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:03.084 19:16:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:03.084 19:16:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3065202 00:26:03.084 19:16:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:03.084 19:16:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:03.084 19:16:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:26:03.084 19:16:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:03.084 19:16:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:26:03.084 19:16:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:03.084 19:16:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:03.084 rmmod nvme_tcp 00:26:03.084 rmmod nvme_fabrics 00:26:03.084 rmmod nvme_keyring 00:26:03.084 19:16:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:03.084 19:16:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:26:03.084 19:16:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:26:03.084 19:16:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3065135 ']' 00:26:03.084 19:16:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3065135 00:26:03.084 19:16:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 3065135 ']' 00:26:03.084 19:16:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 3065135 00:26:03.084 19:16:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:26:03.084 19:16:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:03.084 19:16:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3065135 00:26:03.084 19:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:03.084 19:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:03.084 19:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3065135' 00:26:03.084 killing process with pid 3065135 00:26:03.084 19:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 3065135 00:26:03.084 19:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 3065135 00:26:03.084 19:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:03.084 19:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:03.084 19:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:03.084 19:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:26:03.084 19:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:26:03.084 19:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:03.084 19:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:26:03.084 19:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:03.084 19:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:03.084 19:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:03.084 19:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:03.084 19:16:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.104 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:05.104 00:26:05.104 real 0m21.409s 00:26:05.104 user 0m25.659s 00:26:05.104 sys 0m7.374s 00:26:05.104 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:05.104 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.104 ************************************ 00:26:05.104 END TEST nvmf_host_discovery 00:26:05.104 ************************************ 00:26:05.104 19:16:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test 
nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:05.104 19:16:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:05.104 19:16:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:05.104 19:16:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.396 ************************************ 00:26:05.396 START TEST nvmf_host_multipath_status 00:26:05.396 ************************************ 00:26:05.396 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:05.396 * Looking for test storage... 00:26:05.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:05.396 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:05.396 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:26:05.396 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:05.396 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:05.396 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:05.396 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:05.396 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:05.396 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:26:05.396 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:26:05.396 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:26:05.396 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:26:05.396 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:26:05.396 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:26:05.396 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:26:05.396 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:05.396 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:26:05.396 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:26:05.396 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:05.396 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:05.396 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:26:05.396 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:26:05.396 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:05.396 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:26:05.396 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:26:05.396 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:26:05.396 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:26:05.396 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:05.396 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:26:05.396 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:26:05.396 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:05.396 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:05.396 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:26:05.396 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:05.396 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:05.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.396 --rc genhtml_branch_coverage=1 00:26:05.396 --rc genhtml_function_coverage=1 00:26:05.396 --rc genhtml_legend=1 00:26:05.396 --rc geninfo_all_blocks=1 00:26:05.396 --rc geninfo_unexecuted_blocks=1 00:26:05.396 00:26:05.396 ' 00:26:05.396 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:05.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.397 --rc genhtml_branch_coverage=1 00:26:05.397 --rc genhtml_function_coverage=1 00:26:05.397 --rc genhtml_legend=1 00:26:05.397 --rc geninfo_all_blocks=1 00:26:05.397 --rc geninfo_unexecuted_blocks=1 00:26:05.397 00:26:05.397 ' 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:05.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.397 --rc genhtml_branch_coverage=1 00:26:05.397 --rc genhtml_function_coverage=1 00:26:05.397 --rc genhtml_legend=1 00:26:05.397 --rc geninfo_all_blocks=1 00:26:05.397 --rc geninfo_unexecuted_blocks=1 00:26:05.397 00:26:05.397 ' 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:05.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.397 --rc genhtml_branch_coverage=1 00:26:05.397 --rc genhtml_function_coverage=1 00:26:05.397 --rc genhtml_legend=1 00:26:05.397 --rc geninfo_all_blocks=1 00:26:05.397 --rc geninfo_unexecuted_blocks=1 00:26:05.397 00:26:05.397 ' 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:05.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:05.397 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:05.398 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:05.398 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:05.398 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.398 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:05.398 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.398 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:05.398 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:05.398 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:26:05.398 19:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:13.529 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:13.529 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:26:13.529 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:13.529 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:13.529 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:13.529 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:13.529 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:13.529 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:26:13.529 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:13.529 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:26:13.529 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:26:13.529 19:16:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:26:13.529 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:26:13.529 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:26:13.529 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:26:13.529 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:13.529 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:13.530 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:13.530 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:13.530 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: 
cvl_0_1' 00:26:13.530 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:13.530 19:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:13.530 19:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:13.530 19:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:13.530 19:16:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:13.530 19:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:13.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:13.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.575 ms 00:26:13.530 00:26:13.530 --- 10.0.0.2 ping statistics --- 00:26:13.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.530 rtt min/avg/max/mdev = 0.575/0.575/0.575/0.000 ms 00:26:13.530 19:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:13.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:13.530 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:26:13.530 00:26:13.530 --- 10.0.0.1 ping statistics --- 00:26:13.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.530 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:26:13.530 19:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:13.530 19:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:26:13.530 19:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:13.530 19:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:13.530 19:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:13.530 19:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:13.530 19:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:13.530 19:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:13.530 19:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:13.530 19:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:13.530 19:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:13.530 19:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:13.530 19:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:13.530 19:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3071688 00:26:13.531 19:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:13.531 19:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3071688 00:26:13.531 19:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3071688 ']' 00:26:13.531 19:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.531 19:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:13.531 19:16:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:13.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.531 19:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:13.531 19:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:13.531 [2024-11-26 19:16:30.185840] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:26:13.531 [2024-11-26 19:16:30.185900] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:13.531 [2024-11-26 19:16:30.284752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:13.531 [2024-11-26 19:16:30.336043] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:13.531 [2024-11-26 19:16:30.336101] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:13.531 [2024-11-26 19:16:30.336112] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:13.531 [2024-11-26 19:16:30.336120] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:13.531 [2024-11-26 19:16:30.336126] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:13.531 [2024-11-26 19:16:30.337969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.531 [2024-11-26 19:16:30.337970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.531 19:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:13.531 19:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:26:13.531 19:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:13.531 19:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:13.531 19:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:13.531 19:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:13.531 19:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3071688 00:26:13.531 19:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:13.531 [2024-11-26 19:16:30.664706] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:13.531 19:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:13.790 Malloc0 00:26:13.791 19:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:26:14.050 19:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:14.310 19:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:14.310 [2024-11-26 19:16:31.498223] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:14.571 19:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:14.571 [2024-11-26 19:16:31.694661] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:14.571 19:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3072042 00:26:14.571 19:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:14.571 19:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:14.571 19:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3072042 /var/tmp/bdevperf.sock 00:26:14.571 19:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3072042 ']' 00:26:14.571 19:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:14.571 19:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:14.571 19:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:14.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
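The target-side setup traced above condenses to this RPC sequence (a sketch assembled from the @36 through @42 commands in the trace; $RPC stands for the full scripts/rpc.py path, and the -o transport option is carried over verbatim):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB in-capsule data
    $RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -r -m 2                # -r enables ANA reporting
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

With two listeners on one subsystem, the bdevperf app started above (-z, RPC socket /var/tmp/bdevperf.sock, 128-deep 4 KiB verify job) plays the host side; the two bdev_nvme_attach_controller calls that follow join both ports into a single multipath controller Nvme0.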
00:26:14.571 19:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:14.571 19:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:15.516 19:16:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:15.516 19:16:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:26:15.516 19:16:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:15.776 19:16:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:16.037 Nvme0n1 00:26:16.037 19:16:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:16.609 Nvme0n1 00:26:16.609 19:16:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:16.609 19:16:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:18.521 19:16:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:18.521 19:16:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:18.781 19:16:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:18.781 19:16:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:20.165 19:16:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:20.165 19:16:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:20.165 19:16:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.165 19:16:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:20.165 19:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.165 19:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:20.165 19:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.165 19:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:20.165 19:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:20.165 19:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:20.165 19:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.165 19:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:20.425 19:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.425 19:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:20.425 19:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.425 19:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:20.686 19:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.686 19:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:20.686 19:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:20.686 19:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.946 19:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.946 19:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:20.946 19:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.946 19:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:20.946 19:16:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.947 19:16:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:20.947 19:16:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
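Each check_status line in the trace expands into the six port_status probes seen above. Reconstructed from the @64 xtrace entries (the helper names appear in the trace; local variable names here are assumptions), the pair is roughly:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock

    # port_status PORT FIELD EXPECTED: pull one io_path attribute for the
    # listener on PORT and compare it with the expected value.
    port_status() {
        local port=$1 field=$2 expected=$3 got
        got=$($RPC -s $SOCK bdev_nvme_get_io_paths |
              jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
        [[ $got == "$expected" ]]
    }

    # check_status: probe order as in the trace, current/connected/accessible,
    # port 4420 then 4421 within each pair.
    check_status() {
        port_status 4420 current    $1
        port_status 4421 current    $2
        port_status 4420 connected  $3
        port_status 4421 connected  $4
        port_status 4420 accessible $5
        port_status 4421 accessible $6
    }

A mismatch makes the [[ ]] test return nonzero, which the suite's errexit handling turns into a test failure.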
00:26:21.207 19:16:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:21.468 19:16:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:22.410 19:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:22.410 19:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:22.410 19:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.410 19:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:22.671 19:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:22.671 19:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:22.671 19:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.671 19:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:22.671 19:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.671 19:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:22.671 19:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.671 19:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:22.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:22.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.931 19:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:23.192 19:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.192 19:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:23.192 19:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
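(The jq half of the probe just issued follows below.) For orientation, the bdev_nvme_get_io_paths document those filters walk has roughly this shape; only the fields the filters actually touch are shown, and the values are illustrative:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths | jq '.poll_groups[].io_paths[]'
    # each path object looks roughly like:
    # {
    #   "current":    true,                       # path is carrying I/O right now
    #   "connected":  true,                       # TCP connection to the listener is up
    #   "accessible": true,                       # ANA state permits I/O
    #   "transport":  { "trsvcid": "4420", ... }  # listener port, keys the select()
    # }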
00:26:23.192 19:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:23.453 19:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.453 19:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:23.453 19:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.453 19:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:23.453 19:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.453 19:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:23.453 19:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:23.714 19:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:23.974 19:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:24.914 19:16:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:24.914 19:16:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:24.914 19:16:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.914 19:16:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:25.174 19:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.174 19:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:25.174 19:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.174 19:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:25.174 19:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:25.174 19:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:25.174 19:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.174 19:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:25.436 19:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.436 19:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:25.436 19:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:25.436 19:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.696 19:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.696 19:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:25.696 19:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.696 19:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:25.956 19:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.956 19:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:25.956 19:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.956 19:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:25.956 19:16:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.956 19:16:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:25.956 19:16:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:26.215 19:16:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:26.474 19:16:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:27.416 19:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:27.416 19:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:27.416 19:16:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.416 19:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:27.677 19:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.677 19:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:27.677 19:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.677 19:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:27.677 19:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:27.677 19:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:27.677 19:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.677 19:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:27.938 19:16:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.938 19:16:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:27.938 19:16:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.938 19:16:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:28.200 19:16:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.200 19:16:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:28.200 19:16:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.200 19:16:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:28.462 19:16:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.462 19:16:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:28.462 19:16:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.462 19:16:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:28.462 19:16:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:28.462 19:16:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:28.462 19:16:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:28.722 19:16:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:28.981 19:16:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:29.922 19:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:29.922 19:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:29.922 19:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.922 19:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:30.182 19:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:30.182 19:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:30.182 19:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.182 19:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:30.182 19:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:30.182 19:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:30.448 19:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.448 19:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:30.448 19:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.448 19:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:30.448 19:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.449 19:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:30.708 19:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.708 19:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:30.708 19:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.708 19:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:30.967 19:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:30.967 19:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:30.967 19:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.968 19:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:30.968 19:16:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:30.968 19:16:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:30.968 19:16:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:31.228 19:16:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:31.488 19:16:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:32.428 19:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:32.429 19:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:32.429 19:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.429 19:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:32.689 19:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:32.689 19:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:32.689 19:16:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.689 19:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:32.689 19:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.689 19:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:32.689 19:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.689 19:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:32.950 19:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.950 19:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:32.950 19:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.950 19:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:33.210 19:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:33.210 19:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:33.210 19:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.210 19:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:33.210 19:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:33.469 19:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:33.469 19:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:33.469 19:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.469 19:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:33.469 19:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:33.730 19:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:26:33.730 19:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:33.730 19:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:33.989 19:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:34.929 19:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:34.929 19:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:34.929 19:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.929 19:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:35.188 19:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.188 19:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:35.188 19:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.188 19:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:35.448 19:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.448 19:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:35.448 19:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.448 19:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:35.708 19:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.708 19:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:35.708 19:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.708 19:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:35.708 19:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.708 19:16:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:35.708 19:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.708 19:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:35.969 19:16:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.969 19:16:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:35.969 19:16:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.969 19:16:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:36.230 19:16:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:36.230 19:16:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:36.230 19:16:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:36.230 19:16:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:36.489 19:16:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:37.429 19:16:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:37.429 19:16:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:37.429 19:16:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.429 19:16:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:37.689 19:16:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:37.689 19:16:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:37.689 19:16:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.689 19:16:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:37.949 19:16:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.949 19:16:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:37.949 19:16:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.949 19:16:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:37.949 19:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.949 19:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:37.949 19:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.949 19:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:38.209 19:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.209 19:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:38.209 19:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.209 19:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:38.469 19:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.469 19:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:38.469 19:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.469 19:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:38.729 19:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.729 19:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:38.729 19:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:38.729 19:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:38.989 19:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
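From the @116 bdev_nvme_set_multipath_policy -p active_active call onward the expectations shift: all paths in the best available ANA group carry I/O at once, instead of the single active path of the default policy. Equal states on both listeners therefore yield current==true on both ports, while a mixed pair still leaves only the better path current. In terms of the trace:

    # optimized/optimized (@121) and non_optimized/non_optimized (@131):
    check_status true true true true true true
    # non_optimized/optimized (@125): only the optimized path is current:
    check_status false true true true true true
    # non_optimized/inaccessible (@135): 4421 loses current and accessible:
    check_status true false true true true false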
00:26:39.928 19:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:39.928 19:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:39.928 19:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.929 19:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:40.190 19:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.190 19:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:40.190 19:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.190 19:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:40.450 19:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.450 19:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:40.450 19:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:40.450 19:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.450 19:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.450 19:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:40.450 19:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:40.450 19:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.710 19:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.710 19:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:40.710 19:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:40.710 19:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.971 19:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.971 19:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:40.971 19:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.971 19:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:41.231 19:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:41.231 19:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:41.231 19:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:41.231 19:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:41.492 19:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:42.435 19:16:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:42.435 19:16:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:42.435 19:16:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.436 19:16:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:42.697 19:16:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.697 19:16:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:42.697 19:16:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.697 19:16:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:42.957 19:16:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:42.957 19:16:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:42.957 19:16:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:42.957 19:16:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.957 19:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:26:42.957 19:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:42.957 19:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.957 19:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:43.217 19:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:43.217 19:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:43.217 19:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.217 19:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:43.477 19:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:43.477 19:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:43.477 19:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.477 19:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:43.740 19:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:43.740 19:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3072042 00:26:43.740 19:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3072042 ']' 00:26:43.740 19:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3072042 00:26:43.740 19:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:26:43.740 19:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:43.740 19:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3072042 00:26:43.740 19:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:43.740 19:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:43.740 19:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3072042' 00:26:43.740 killing process with pid 3072042 00:26:43.740 19:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3072042 00:26:43.740 19:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3072042 00:26:43.740 { 00:26:43.740 "results": [ 00:26:43.740 { 00:26:43.740 "job": "Nvme0n1", 
00:26:43.740 "core_mask": "0x4", 00:26:43.740 "workload": "verify", 00:26:43.740 "status": "terminated", 00:26:43.740 "verify_range": { 00:26:43.740 "start": 0, 00:26:43.740 "length": 16384 00:26:43.740 }, 00:26:43.740 "queue_depth": 128, 00:26:43.740 "io_size": 4096, 00:26:43.740 "runtime": 27.100662, 00:26:43.740 "iops": 11888.676372555032, 00:26:43.740 "mibps": 46.44014208029309, 00:26:43.740 "io_failed": 0, 00:26:43.740 "io_timeout": 0, 00:26:43.740 "avg_latency_us": 10746.51491833519, 00:26:43.740 "min_latency_us": 344.74666666666667, 00:26:43.740 "max_latency_us": 3019898.88 00:26:43.740 } 00:26:43.740 ], 00:26:43.740 "core_count": 1 00:26:43.740 } 00:26:43.740 19:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3072042 00:26:43.740 19:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:43.740 [2024-11-26 19:16:31.784015] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:26:43.740 [2024-11-26 19:16:31.784098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3072042 ] 00:26:43.740 [2024-11-26 19:16:31.876010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.740 [2024-11-26 19:16:31.926363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:43.740 Running I/O for 90 seconds... 00:26:43.740 9427.00 IOPS, 36.82 MiB/s [2024-11-26T18:17:00.953Z] 10231.00 IOPS, 39.96 MiB/s [2024-11-26T18:17:00.953Z] 10580.33 IOPS, 41.33 MiB/s [2024-11-26T18:17:00.953Z] 10761.75 IOPS, 42.04 MiB/s [2024-11-26T18:17:00.953Z] 11195.00 IOPS, 43.73 MiB/s [2024-11-26T18:17:00.953Z] 11461.67 IOPS, 44.77 MiB/s [2024-11-26T18:17:00.953Z] 11660.71 IOPS, 45.55 MiB/s [2024-11-26T18:17:00.953Z] 11816.25 IOPS, 46.16 MiB/s [2024-11-26T18:17:00.953Z] 11938.67 IOPS, 46.64 MiB/s [2024-11-26T18:17:00.953Z] 12016.40 IOPS, 46.94 MiB/s [2024-11-26T18:17:00.953Z] 12087.55 IOPS, 47.22 MiB/s [2024-11-26T18:17:00.953Z] 12152.83 IOPS, 47.47 MiB/s [2024-11-26T18:17:00.953Z] [2024-11-26 19:16:45.792577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:130640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.740 [2024-11-26 19:16:45.792609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:43.740 [2024-11-26 19:16:45.792643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:130648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.740 [2024-11-26 19:16:45.792649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:43.740 [2024-11-26 19:16:45.792661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:130656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.741 [2024-11-26 19:16:45.792667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:43.741 [2024-11-26 19:16:45.792678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:130664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
[ try.txt output condensed: this 19:16:45 burst continues with roughly 60 more wrapped nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs, WRITE qid:1, lba 130744 through 131064 and wrapping to lba 0 through 144 in 8-block steps, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0 and identical in form to the notices above ]
00:26:43.743 [2024-11-26 19:16:45.795853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:43.743 11338.62 IOPS, 44.29 MiB/s [2024-11-26T18:17:00.956Z] 10528.71 IOPS, 41.13 MiB/s [2024-11-26T18:17:00.956Z] 9826.80 IOPS, 38.39 MiB/s [2024-11-26T18:17:00.956Z] 9917.19 IOPS, 38.74 MiB/s [2024-11-26T18:17:00.956Z] 10094.59 IOPS, 39.43 MiB/s [2024-11-26T18:17:00.956Z] 10407.83 IOPS, 40.66 MiB/s [2024-11-26T18:17:00.956Z] 10750.95 IOPS, 42.00 MiB/s [2024-11-26T18:17:00.956Z] 11021.55 IOPS, 43.05 MiB/s [2024-11-26T18:17:00.956Z] 11118.62 IOPS, 43.43 MiB/s [2024-11-26T18:17:00.956Z] 11216.18 IOPS, 43.81 MiB/s [2024-11-26T18:17:00.956Z] 11385.96 IOPS, 44.48 MiB/s [2024-11-26T18:17:00.956Z] 11616.17 IOPS, 45.38 MiB/s [2024-11-26T18:17:00.956Z] [2024-11-26 19:16:58.559948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:117960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.743 [2024-11-26 19:16:58.559983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:43.743 [2024-11-26 19:16:58.560015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:117976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.743 [2024-11-26 19:16:58.560022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:43.743 [2024-11-26 19:16:58.560033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:117992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.743 [2024-11-26 19:16:58.560039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:43.743 [2024-11-26 19:16:58.560050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:118008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.743 [2024-11-26 19:16:58.560055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:43.743 [2024-11-26 19:16:58.560065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:118024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.743 [2024-11-26 19:16:58.560071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:43.743 [2024-11-26 19:16:58.560082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:118040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.743 [2024-11-26 19:16:58.560087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:43.743 [2024-11-26 19:16:58.560097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:118056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.743 [2024-11-26 19:16:58.560103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:43.743 [2024-11-26 19:16:58.560114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:118072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.743 
[ further try.txt output condensed: the 19:16:58 burst continues with roughly 50 more nvme_qpair.c WRITE command / ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion notice pairs, qid:1, lba 118088 through 118784 in 16-block steps, plus interleaved READ completions for lba 117856 through 117952, all cdw0:0 p:0 m:0 dnr:0 ]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:43.745 [2024-11-26 19:16:58.562164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:118800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.745 [2024-11-26 19:16:58.562171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:43.745 [2024-11-26 19:16:58.562182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:118816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.745 [2024-11-26 19:16:58.562187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:43.745 [2024-11-26 19:16:58.562197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:118832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.745 [2024-11-26 19:16:58.562203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:43.745 [2024-11-26 19:16:58.562213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:117832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.745 [2024-11-26 19:16:58.562218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:43.745 [2024-11-26 19:16:58.562229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.745 [2024-11-26 19:16:58.562234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:43.745 [2024-11-26 19:16:58.562244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:117896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.745 [2024-11-26 19:16:58.562249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.745 [2024-11-26 19:16:58.562260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:117928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.745 [2024-11-26 19:16:58.562265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.745 11805.60 IOPS, 46.12 MiB/s [2024-11-26T18:17:00.958Z] 11851.19 IOPS, 46.29 MiB/s [2024-11-26T18:17:00.958Z] 11889.56 IOPS, 46.44 MiB/s [2024-11-26T18:17:00.958Z] Received shutdown signal, test time was about 27.101270 seconds 00:26:43.745 00:26:43.745 Latency(us) 00:26:43.745 [2024-11-26T18:17:00.958Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:43.745 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:43.745 Verification LBA range: start 0x0 length 0x4000 00:26:43.745 Nvme0n1 : 27.10 11888.68 46.44 0.00 0.00 10746.51 344.75 3019898.88 00:26:43.745 [2024-11-26T18:17:00.958Z] =================================================================================================================== 00:26:43.745 [2024-11-26T18:17:00.958Z] Total : 11888.68 46.44 0.00 0.00 10746.51 344.75 3019898.88 00:26:43.745 
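The multipath checks in the trace above reduce to one RPC-plus-jq query per port. A minimal standalone sketch of that pattern, under the same assumptions as this run (bdevperf answering RPCs on /var/tmp/bdevperf.sock, a single poll group since only one reactor core is in use, and the port_status name mirroring the helper in host/multipath_status.sh):

    port_status() {   # usage: port_status <trsvcid> <field> <expected>
        local port=$1 field=$2 expected=$3 actual
        # bdev_nvme_get_io_paths lists every io_path per poll group; pick the
        # path whose listener port matches and read one boolean field. With a
        # single poll group this yields exactly one true/false value.
        actual=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
        [[ $actual == "$expected" ]]
    }
    port_status 4421 connected true     # second path reconnected
    port_status 4420 accessible true    # first path ANA-accessible again
    port_status 4421 accessible false   # second path now ANA-inaccessible

The summary numbers are self-consistent, too: 11888.68 IOPS * 4096-byte I/Os / 2^20 gives the reported 46.44 MiB/s across the 27.10-second runtime.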
19:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:44.007 19:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:44.007 19:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:44.007 19:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:44.007 19:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:44.007 19:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:26:44.007 19:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:44.007 19:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:26:44.007 19:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:44.007 19:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:44.007 rmmod nvme_tcp 00:26:44.007 rmmod nvme_fabrics 00:26:44.007 rmmod nvme_keyring 00:26:44.007 19:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:44.007 19:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:26:44.007 19:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:26:44.007 19:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3071688 ']' 00:26:44.007 19:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3071688 00:26:44.007 19:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3071688 ']' 00:26:44.007 19:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3071688 00:26:44.007 19:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:26:44.007 19:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:44.007 19:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3071688 00:26:44.268 19:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:44.268 19:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:44.268 19:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3071688' 00:26:44.268 killing process with pid 3071688 00:26:44.268 19:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3071688 00:26:44.268 19:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3071688 00:26:44.268 19:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:44.268 19:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:44.268 19:17:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:44.268 19:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:26:44.268 19:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:26:44.268 19:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:44.268 19:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:26:44.268 19:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:44.268 19:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:44.268 19:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.268 19:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:44.268 19:17:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:46.808 00:26:46.808 real 0m41.123s 00:26:46.808 user 1m46.582s 00:26:46.808 sys 0m11.780s 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:46.808 ************************************ 00:26:46.808 END TEST nvmf_host_multipath_status 00:26:46.808 ************************************ 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.808 ************************************ 00:26:46.808 START TEST nvmf_discovery_remove_ifc 00:26:46.808 ************************************ 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:46.808 * Looking for test storage... 
00:26:46.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:46.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:46.808 --rc genhtml_branch_coverage=1 00:26:46.808 --rc genhtml_function_coverage=1 00:26:46.808 --rc genhtml_legend=1 00:26:46.808 --rc geninfo_all_blocks=1 00:26:46.808 --rc geninfo_unexecuted_blocks=1 00:26:46.808 00:26:46.808 ' 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:46.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:46.808 --rc genhtml_branch_coverage=1 00:26:46.808 --rc genhtml_function_coverage=1 00:26:46.808 --rc genhtml_legend=1 00:26:46.808 --rc geninfo_all_blocks=1 00:26:46.808 --rc geninfo_unexecuted_blocks=1 00:26:46.808 00:26:46.808 ' 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:46.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:46.808 --rc genhtml_branch_coverage=1 00:26:46.808 --rc genhtml_function_coverage=1 00:26:46.808 --rc genhtml_legend=1 00:26:46.808 --rc geninfo_all_blocks=1 00:26:46.808 --rc geninfo_unexecuted_blocks=1 00:26:46.808 00:26:46.808 ' 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:46.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:46.808 --rc genhtml_branch_coverage=1 00:26:46.808 --rc genhtml_function_coverage=1 00:26:46.808 --rc genhtml_legend=1 00:26:46.808 --rc geninfo_all_blocks=1 00:26:46.808 --rc geninfo_unexecuted_blocks=1 00:26:46.808 00:26:46.808 ' 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:46.808 
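The lcov gate traced above is scripts/common.sh's cmp_versions: split each version on '.'/'-', then compare component by component until one side wins. A minimal standalone sketch of the same dotted comparison, in plain bash and assuming purely numeric components (the real helper also tolerates '-' separators):

    # version_lt A B: succeed when A sorts strictly before B (e.g. 1.15 < 2).
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing components count as 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov predates 2.x"   # matches the trace: lt 1.15 2 -> true

That true result is what selected the pre-2.x '--rc lcov_branch_coverage=1' spelling of the coverage options in the trace above.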
19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:46.808 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:46.808 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:26:46.809 19:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:26:54.957 19:17:10 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:54.957 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:54.957 19:17:10 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:54.957 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:54.957 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.957 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:54.958 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:54.958 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:54.958 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:54.958 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.958 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:54.958 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:54.958 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:54.958 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:54.958 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:26:54.958 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:54.958 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:54.958 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:54.958 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:54.958 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:54.958 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:54.958 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:54.958 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:54.958 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:54.958 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:54.958 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:54.958 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:54.958 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:54.958 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:54.958 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:54.958 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:54.958 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:54.958 19:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:54.958 19:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:54.958 19:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:54.958 19:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:54.958 19:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:54.958 19:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:54.958 19:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:54.958 19:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:54.958 
19:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:54.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:54.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.466 ms 00:26:54.958 00:26:54.958 --- 10.0.0.2 ping statistics --- 00:26:54.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.958 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:26:54.958 19:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:54.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:54.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:26:54.958 00:26:54.958 --- 10.0.0.1 ping statistics --- 00:26:54.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.958 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:26:54.958 19:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:54.958 19:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:26:54.958 19:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:54.958 19:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:54.958 19:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:54.958 19:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:54.958 19:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:54.958 19:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:54.958 19:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:54.958 19:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:54.958 19:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:54.958 19:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:54.958 19:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.958 19:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3082167 00:26:54.958 19:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 3082167 00:26:54.958 19:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:54.958 19:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3082167 ']' 00:26:54.958 19:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.958 19:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:54.958 19:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:54.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:54.958 19:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:54.958 19:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.958 [2024-11-26 19:17:11.356936] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:26:54.958 [2024-11-26 19:17:11.357005] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:54.958 [2024-11-26 19:17:11.459481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.958 [2024-11-26 19:17:11.510012] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:54.958 [2024-11-26 19:17:11.510060] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:54.958 [2024-11-26 19:17:11.510069] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:54.958 [2024-11-26 19:17:11.510077] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:54.958 [2024-11-26 19:17:11.510083] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:54.958 [2024-11-26 19:17:11.510901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:55.271 19:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:55.271 19:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:55.271 19:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:55.271 19:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:55.271 19:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:55.271 19:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:55.271 19:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:55.271 19:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.271 19:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:55.271 [2024-11-26 19:17:12.247595] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:55.271 [2024-11-26 19:17:12.255869] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:55.271 null0 00:26:55.271 [2024-11-26 19:17:12.287800] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:55.271 19:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.271 19:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3082464 00:26:55.271 19:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3082464 /tmp/host.sock 00:26:55.271 19:17:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:55.271 19:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3082464 ']' 00:26:55.271 19:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:55.271 19:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:55.271 19:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:55.271 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:55.271 19:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:55.271 19:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:55.271 [2024-11-26 19:17:12.364402] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:26:55.271 [2024-11-26 19:17:12.364473] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3082464 ] 00:26:55.575 [2024-11-26 19:17:12.458447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.575 [2024-11-26 19:17:12.511855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.174 19:17:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:56.174 19:17:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:56.174 19:17:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:56.174 19:17:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:56.174 19:17:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.174 19:17:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.174 19:17:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.174 19:17:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:56.174 19:17:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.174 19:17:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.174 19:17:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.174 19:17:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:56.174 19:17:13 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.174 19:17:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:57.116 [2024-11-26 19:17:14.316775] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:57.116 [2024-11-26 19:17:14.316807] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:57.116 [2024-11-26 19:17:14.316822] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:57.376 [2024-11-26 19:17:14.406109] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:57.637 [2024-11-26 19:17:14.589604] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:57.637 [2024-11-26 19:17:14.590995] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x133e410:1 started. 00:26:57.637 [2024-11-26 19:17:14.592809] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:57.637 [2024-11-26 19:17:14.592877] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:57.637 [2024-11-26 19:17:14.592903] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:57.637 [2024-11-26 19:17:14.592921] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:57.637 [2024-11-26 19:17:14.592946] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:57.637 19:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.637 19:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:57.637 19:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:57.637 19:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:57.637 19:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.637 19:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:57.637 19:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:57.637 19:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:57.637 19:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:57.637 19:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.637 [2024-11-26 19:17:14.637040] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x133e410 was disconnected and freed. delete nvme_qpair. 
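wait_for_bdev, driving the bdev_get_bdevs / jq / sort / xargs pipelines that recur from here on, is a plain poll loop: list the host's bdevs over the /tmp/host.sock RPC channel, normalize the names, and re-check once per second (the sleep 1 lines below) until the list matches what the test expects — nvme0n1 while the path is up, the empty string after it is torn down. A sketch using SPDK's stock rpc.py client; the 30 s cap is an assumption of this sketch, not something the trace shows:

    # Succeed once the bdev name list equals "$expected" ("nvme0n1", or "" after removal).
    wait_for_bdev() {
        local expected=$1 deadline=$(( SECONDS + 30 ))   # timeout value assumed
        while (( SECONDS < deadline )); do
            local names
            names=$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
                        | jq -r '.[].name' | sort | xargs)
            [[ $names == "$expected" ]] && return 0
            sleep 1
        done
        return 1
    }

The sort | xargs pair is the normalization visible in the trace: sort for a stable order, xargs to flatten the names onto one space-separated line for the string compare.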
00:26:57.637 19:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:57.637 19:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:57.637 19:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:57.637 19:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:57.637 19:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:57.637 19:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:57.637 19:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:57.637 19:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.637 19:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:57.637 19:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:57.637 19:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:57.637 19:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.637 19:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:57.637 19:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:59.019 19:17:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:59.019 19:17:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:59.019 19:17:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:59.019 19:17:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.019 19:17:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:59.019 19:17:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:59.019 19:17:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:59.019 19:17:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.019 19:17:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:59.019 19:17:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:59.958 19:17:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:59.958 19:17:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:59.959 19:17:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:59.959 19:17:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.959 19:17:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:59.959 19:17:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:59.959 19:17:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:59.959 19:17:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.959 19:17:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:59.959 19:17:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:00.899 19:17:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:00.899 19:17:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:00.899 19:17:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:00.899 19:17:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.899 19:17:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:00.899 19:17:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:00.899 19:17:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:00.899 19:17:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.899 19:17:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:00.899 19:17:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:01.840 19:17:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:01.840 19:17:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:01.840 19:17:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:01.840 19:17:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.840 19:17:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:01.840 19:17:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:01.840 19:17:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:01.840 19:17:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.100 19:17:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:02.100 19:17:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:03.039 [2024-11-26 19:17:20.032793] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:03.039 [2024-11-26 19:17:20.032829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.039 [2024-11-26 19:17:20.032839] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.039 [2024-11-26 19:17:20.032847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.039 [2024-11-26 19:17:20.032852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.039 [2024-11-26 19:17:20.032858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.039 [2024-11-26 19:17:20.032868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.039 [2024-11-26 19:17:20.032874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.039 [2024-11-26 19:17:20.032879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.039 [2024-11-26 19:17:20.032885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.039 [2024-11-26 19:17:20.032890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.039 [2024-11-26 19:17:20.032895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ac50 is same with the state(6) to be set 00:27:03.039 [2024-11-26 19:17:20.042814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x131ac50 (9): Bad file descriptor 00:27:03.039 [2024-11-26 19:17:20.052848] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:03.039 [2024-11-26 19:17:20.052857] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:03.039 [2024-11-26 19:17:20.052861] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:03.039 [2024-11-26 19:17:20.052865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:03.039 [2024-11-26 19:17:20.052881] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
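The reconnect churn above is exactly what the discovery flags set at startup ask for: a 1 s reconnect delay, a 1 s fast-I/O-fail window, and a 2 s controller-loss timeout, so yanking the address out from under the TCP connection (the errno 110 above) walks the controller through disconnect, failed reconnect attempts, and finally the failed state. The knob-setting call as the test issued it against the host socket (rpc_cmd in the trace forwards to scripts/rpc.py):

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 \
        --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 \
        --wait-for-attach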
00:27:03.039 19:17:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:03.039 19:17:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:03.039 19:17:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:03.039 19:17:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.039 19:17:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:03.039 19:17:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:03.039 19:17:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:03.979 [2024-11-26 19:17:21.115264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:03.979 [2024-11-26 19:17:21.115365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131ac50 with addr=10.0.0.2, port=4420 00:27:03.979 [2024-11-26 19:17:21.115400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131ac50 is same with the state(6) to be set 00:27:03.979 [2024-11-26 19:17:21.115464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x131ac50 (9): Bad file descriptor 00:27:03.979 [2024-11-26 19:17:21.116611] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:27:03.979 [2024-11-26 19:17:21.116685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:03.979 [2024-11-26 19:17:21.116708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:03.979 [2024-11-26 19:17:21.116732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:03.979 [2024-11-26 19:17:21.116753] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:03.979 [2024-11-26 19:17:21.116770] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:03.979 [2024-11-26 19:17:21.116796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:03.979 [2024-11-26 19:17:21.116819] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:03.979 [2024-11-26 19:17:21.116834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:03.979 19:17:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.979 19:17:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:03.979 19:17:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:04.918 [2024-11-26 19:17:22.119262] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:04.918 [2024-11-26 19:17:22.119285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:27:04.918 [2024-11-26 19:17:22.119298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:04.918 [2024-11-26 19:17:22.119304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:04.918 [2024-11-26 19:17:22.119310] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:27:04.918 [2024-11-26 19:17:22.119316] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:04.918 [2024-11-26 19:17:22.119320] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:04.918 [2024-11-26 19:17:22.119324] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:04.918 [2024-11-26 19:17:22.119346] bdev_nvme.c:7235:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:04.918 [2024-11-26 19:17:22.119373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.918 [2024-11-26 19:17:22.119381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.918 [2024-11-26 19:17:22.119391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.918 [2024-11-26 19:17:22.119396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.918 [2024-11-26 19:17:22.119403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.918 [2024-11-26 19:17:22.119408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.918 [2024-11-26 19:17:22.119414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.918 [2024-11-26 19:17:22.119419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.918 [2024-11-26 19:17:22.119425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.918 [2024-11-26 19:17:22.119431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.918 [2024-11-26 19:17:22.119437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:27:04.918 [2024-11-26 19:17:22.119898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130a350 (9): Bad file descriptor 00:27:04.918 [2024-11-26 19:17:22.120908] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:04.918 [2024-11-26 19:17:22.120918] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:27:05.179 19:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:05.179 19:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:05.179 19:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:05.179 19:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.179 19:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:05.179 19:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:05.179 19:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:05.179 19:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.179 19:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:05.179 19:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:05.179 19:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:05.179 19:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:05.179 19:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:05.179 19:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:05.179 19:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:05.179 19:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.179 19:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:05.179 19:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:05.179 19:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:05.179 19:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.179 19:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:05.179 19:17:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:06.561 19:17:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:06.561 19:17:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:06.561 19:17:23 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:06.561 19:17:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.561 19:17:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:06.561 19:17:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:06.561 19:17:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:06.561 19:17:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.561 19:17:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:06.561 19:17:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:07.131 [2024-11-26 19:17:24.131339] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:07.131 [2024-11-26 19:17:24.131354] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:07.131 [2024-11-26 19:17:24.131364] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:07.131 [2024-11-26 19:17:24.258744] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:07.391 [2024-11-26 19:17:24.359554] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:27:07.391 [2024-11-26 19:17:24.360257] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x12f3eb0:1 started. 00:27:07.391 [2024-11-26 19:17:24.361147] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:07.391 [2024-11-26 19:17:24.361179] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:07.391 [2024-11-26 19:17:24.361195] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:07.391 [2024-11-26 19:17:24.361205] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:07.391 [2024-11-26 19:17:24.361211] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:07.391 [2024-11-26 19:17:24.369564] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x12f3eb0 was disconnected and freed. delete nvme_qpair. 
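With 10.0.0.2 re-plumbed and the link back up, the discovery poller recovers on its own: ctrlr attach, log page fetch, new subsystem nvme1, then the temporary qpair 0x12f3eb0 is connected and freed. The host side of this flow is driven by a single RPC at test setup; a hedged sketch (flag names as commonly used with SPDK's rpc.py bdev_nvme_start_discovery, not shown in this run — the hostnqn value here is illustrative):

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2014-08.org.nvmexpress:uuid:example-hostnqn   # hypothetical hostnqn

It is this standing discovery service, not a fresh connect command, that re-creates nvme1n1 after the interface returns, which the get_bdev_list check below confirms.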
00:27:07.391 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:07.391 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:07.391 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:07.391 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.391 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:07.391 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:07.391 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:07.391 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.391 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:07.391 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:07.391 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3082464 00:27:07.391 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3082464 ']' 00:27:07.391 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3082464 00:27:07.391 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:27:07.391 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:07.391 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3082464 00:27:07.391 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:07.391 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:07.391 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3082464' 00:27:07.391 killing process with pid 3082464 00:27:07.391 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3082464 00:27:07.391 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3082464 00:27:07.651 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:07.651 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:07.651 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:27:07.651 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:07.651 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:27:07.652 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:07.652 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:07.652 rmmod nvme_tcp 00:27:07.652 rmmod nvme_fabrics 00:27:07.652 rmmod nvme_keyring 00:27:07.652 19:17:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:07.652 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:27:07.652 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:27:07.652 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3082167 ']' 00:27:07.652 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3082167 00:27:07.652 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3082167 ']' 00:27:07.652 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3082167 00:27:07.652 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:27:07.652 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:07.652 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3082167 00:27:07.652 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:07.652 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:07.652 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3082167' 00:27:07.652 killing process with pid 3082167 00:27:07.652 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3082167 00:27:07.652 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3082167 00:27:07.652 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:07.652 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:07.652 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:07.652 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:27:07.913 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:27:07.913 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:07.913 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:27:07.913 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:07.913 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:07.913 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.913 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:07.913 19:17:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:09.822 19:17:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:09.822 00:27:09.822 real 0m23.417s 00:27:09.822 user 0m27.348s 00:27:09.822 sys 0m7.195s 00:27:09.822 19:17:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:27:09.822 19:17:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:09.822 ************************************ 00:27:09.822 END TEST nvmf_discovery_remove_ifc 00:27:09.822 ************************************ 00:27:09.822 19:17:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:09.822 19:17:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:09.822 19:17:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:09.822 19:17:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.822 ************************************ 00:27:09.822 START TEST nvmf_identify_kernel_target 00:27:09.822 ************************************ 00:27:09.822 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:10.083 * Looking for test storage... 00:27:10.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:10.083 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:10.083 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:27:10.083 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:10.083 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:10.083 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:10.083 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:10.083 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:10.083 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:27:10.083 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:27:10.083 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:27:10.083 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:27:10.083 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:27:10.083 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:10.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.084 --rc genhtml_branch_coverage=1 00:27:10.084 --rc genhtml_function_coverage=1 00:27:10.084 --rc genhtml_legend=1 00:27:10.084 --rc geninfo_all_blocks=1 00:27:10.084 --rc geninfo_unexecuted_blocks=1 00:27:10.084 00:27:10.084 ' 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:10.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.084 --rc genhtml_branch_coverage=1 00:27:10.084 --rc genhtml_function_coverage=1 00:27:10.084 --rc genhtml_legend=1 00:27:10.084 --rc geninfo_all_blocks=1 00:27:10.084 --rc geninfo_unexecuted_blocks=1 00:27:10.084 00:27:10.084 ' 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:10.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.084 --rc genhtml_branch_coverage=1 00:27:10.084 --rc genhtml_function_coverage=1 00:27:10.084 --rc genhtml_legend=1 00:27:10.084 --rc geninfo_all_blocks=1 00:27:10.084 --rc geninfo_unexecuted_blocks=1 00:27:10.084 00:27:10.084 ' 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:10.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.084 --rc genhtml_branch_coverage=1 00:27:10.084 --rc genhtml_function_coverage=1 00:27:10.084 --rc genhtml_legend=1 00:27:10.084 --rc geninfo_all_blocks=1 00:27:10.084 --rc geninfo_unexecuted_blocks=1 00:27:10.084 00:27:10.084 ' 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:27:10.084 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.084 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:10.085 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:10.085 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:27:10.085 19:17:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:27:18.225 19:17:34 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:18.225 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:18.225 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:18.225 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:18.226 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:18.226 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:18.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:18.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.581 ms 00:27:18.226 00:27:18.226 --- 10.0.0.2 ping statistics --- 00:27:18.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.226 rtt min/avg/max/mdev = 0.581/0.581/0.581/0.000 ms 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:18.226 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:18.226 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:27:18.226 00:27:18.226 --- 10.0.0.1 ping statistics --- 00:27:18.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.226 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:18.226 19:17:34 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:18.226 19:17:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:21.524 Waiting for block devices as requested 00:27:21.524 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:21.524 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:21.524 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:21.524 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:21.524 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:21.524 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:21.784 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:21.784 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:21.784 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:22.047 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:22.047 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:22.309 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:22.309 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:22.309 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:22.569 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:22.569 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:22.569 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:22.829 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:22.829 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:22.829 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:22.829 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:22.829 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:22.829 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
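The configure_kernel_target steps that follow build the kernel NVMe-oF target entirely through configfs; the mkdir/echo/ln sequence in the trace condenses to roughly the sketch below (the attribute file names are the standard kernel nvmet ones, inferred rather than visible in the xtrace):

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=$nvmet/ports/1

    modprobe nvmet
    mkdir -p "$subsys/namespaces/1" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # model string reported to hosts
    echo 1 > "$subsys/attr_allow_any_host"                         # skip the per-host allow-list
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"         # back namespace 1 with the local disk
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"                            # listen address
    echo tcp > "$port/addr_trtype"                                 # transport
    echo 4420 > "$port/addr_trsvcid"                               # port
    echo ipv4 > "$port/addr_adrfam"                                # address family
    ln -s "$subsys" "$port/subsystems/"                            # expose the subsystem on the port

The nvme discover output further down confirms the result: both the discovery subsystem and nqn.2016-06.io.spdk:testnqn are exported on 10.0.0.1:4420.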
00:27:22.829 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:22.829 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:22.829 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:23.089 No valid GPT data, bailing 00:27:23.089 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:23.089 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:27:23.089 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:27:23.089 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:23.089 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:23.089 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:23.089 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:23.090 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:23.090 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:23.090 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:27:23.090 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:23.090 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:27:23.090 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:23.090 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:27:23.090 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:27:23.090 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:27:23.090 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:23.090 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:23.090 00:27:23.090 Discovery Log Number of Records 2, Generation counter 2 00:27:23.090 =====Discovery Log Entry 0====== 00:27:23.090 trtype: tcp 00:27:23.090 adrfam: ipv4 00:27:23.090 subtype: current discovery subsystem 00:27:23.090 treq: not specified, sq flow control disable supported 00:27:23.090 portid: 1 00:27:23.090 trsvcid: 4420 00:27:23.090 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:23.090 traddr: 10.0.0.1 00:27:23.090 eflags: none 00:27:23.090 sectype: none 00:27:23.090 =====Discovery Log Entry 1====== 00:27:23.090 trtype: tcp 00:27:23.090 adrfam: ipv4 00:27:23.090 subtype: nvme subsystem 00:27:23.090 treq: not specified, sq flow control disable 
supported 00:27:23.090 portid: 1 00:27:23.090 trsvcid: 4420 00:27:23.090 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:23.090 traddr: 10.0.0.1 00:27:23.090 eflags: none 00:27:23.090 sectype: none 00:27:23.090 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:23.090 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:23.351 ===================================================== 00:27:23.351 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:23.351 ===================================================== 00:27:23.351 Controller Capabilities/Features 00:27:23.351 ================================ 00:27:23.351 Vendor ID: 0000 00:27:23.351 Subsystem Vendor ID: 0000 00:27:23.351 Serial Number: 02e733b84f8da781b439 00:27:23.351 Model Number: Linux 00:27:23.351 Firmware Version: 6.8.9-20 00:27:23.351 Recommended Arb Burst: 0 00:27:23.351 IEEE OUI Identifier: 00 00 00 00:27:23.351 Multi-path I/O 00:27:23.351 May have multiple subsystem ports: No 00:27:23.351 May have multiple controllers: No 00:27:23.351 Associated with SR-IOV VF: No 00:27:23.351 Max Data Transfer Size: Unlimited 00:27:23.351 Max Number of Namespaces: 0 00:27:23.351 Max Number of I/O Queues: 1024 00:27:23.351 NVMe Specification Version (VS): 1.3 00:27:23.351 NVMe Specification Version (Identify): 1.3 00:27:23.351 Maximum Queue Entries: 1024 00:27:23.351 Contiguous Queues Required: No 00:27:23.351 Arbitration Mechanisms Supported 00:27:23.351 Weighted Round Robin: Not Supported 00:27:23.351 Vendor Specific: Not Supported 00:27:23.351 Reset Timeout: 7500 ms 00:27:23.351 Doorbell Stride: 4 bytes 00:27:23.351 NVM Subsystem Reset: Not Supported 00:27:23.351 Command Sets Supported 00:27:23.351 NVM Command Set: Supported 00:27:23.351 Boot Partition: Not Supported 00:27:23.351 Memory Page Size Minimum: 4096 bytes 00:27:23.351 Memory Page Size Maximum: 4096 bytes 00:27:23.351 Persistent Memory Region: Not Supported 00:27:23.351 Optional Asynchronous Events Supported 00:27:23.351 Namespace Attribute Notices: Not Supported 00:27:23.351 Firmware Activation Notices: Not Supported 00:27:23.351 ANA Change Notices: Not Supported 00:27:23.351 PLE Aggregate Log Change Notices: Not Supported 00:27:23.351 LBA Status Info Alert Notices: Not Supported 00:27:23.351 EGE Aggregate Log Change Notices: Not Supported 00:27:23.351 Normal NVM Subsystem Shutdown event: Not Supported 00:27:23.351 Zone Descriptor Change Notices: Not Supported 00:27:23.351 Discovery Log Change Notices: Supported 00:27:23.351 Controller Attributes 00:27:23.351 128-bit Host Identifier: Not Supported 00:27:23.351 Non-Operational Permissive Mode: Not Supported 00:27:23.351 NVM Sets: Not Supported 00:27:23.351 Read Recovery Levels: Not Supported 00:27:23.351 Endurance Groups: Not Supported 00:27:23.351 Predictable Latency Mode: Not Supported 00:27:23.351 Traffic Based Keep ALive: Not Supported 00:27:23.351 Namespace Granularity: Not Supported 00:27:23.351 SQ Associations: Not Supported 00:27:23.351 UUID List: Not Supported 00:27:23.351 Multi-Domain Subsystem: Not Supported 00:27:23.351 Fixed Capacity Management: Not Supported 00:27:23.351 Variable Capacity Management: Not Supported 00:27:23.351 Delete Endurance Group: Not Supported 00:27:23.351 Delete NVM Set: Not Supported 00:27:23.351 Extended LBA Formats Supported: Not Supported 00:27:23.351 Flexible Data Placement 
Supported: Not Supported 00:27:23.351 00:27:23.351 Controller Memory Buffer Support 00:27:23.351 ================================ 00:27:23.351 Supported: No 00:27:23.351 00:27:23.351 Persistent Memory Region Support 00:27:23.351 ================================ 00:27:23.351 Supported: No 00:27:23.351 00:27:23.351 Admin Command Set Attributes 00:27:23.351 ============================ 00:27:23.351 Security Send/Receive: Not Supported 00:27:23.351 Format NVM: Not Supported 00:27:23.351 Firmware Activate/Download: Not Supported 00:27:23.351 Namespace Management: Not Supported 00:27:23.351 Device Self-Test: Not Supported 00:27:23.351 Directives: Not Supported 00:27:23.351 NVMe-MI: Not Supported 00:27:23.351 Virtualization Management: Not Supported 00:27:23.351 Doorbell Buffer Config: Not Supported 00:27:23.351 Get LBA Status Capability: Not Supported 00:27:23.351 Command & Feature Lockdown Capability: Not Supported 00:27:23.351 Abort Command Limit: 1 00:27:23.351 Async Event Request Limit: 1 00:27:23.351 Number of Firmware Slots: N/A 00:27:23.351 Firmware Slot 1 Read-Only: N/A 00:27:23.351 Firmware Activation Without Reset: N/A 00:27:23.351 Multiple Update Detection Support: N/A 00:27:23.351 Firmware Update Granularity: No Information Provided 00:27:23.351 Per-Namespace SMART Log: No 00:27:23.351 Asymmetric Namespace Access Log Page: Not Supported 00:27:23.351 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:23.351 Command Effects Log Page: Not Supported 00:27:23.351 Get Log Page Extended Data: Supported 00:27:23.351 Telemetry Log Pages: Not Supported 00:27:23.351 Persistent Event Log Pages: Not Supported 00:27:23.351 Supported Log Pages Log Page: May Support 00:27:23.351 Commands Supported & Effects Log Page: Not Supported 00:27:23.351 Feature Identifiers & Effects Log Page:May Support 00:27:23.351 NVMe-MI Commands & Effects Log Page: May Support 00:27:23.351 Data Area 4 for Telemetry Log: Not Supported 00:27:23.351 Error Log Page Entries Supported: 1 00:27:23.351 Keep Alive: Not Supported 00:27:23.351 00:27:23.351 NVM Command Set Attributes 00:27:23.351 ========================== 00:27:23.351 Submission Queue Entry Size 00:27:23.351 Max: 1 00:27:23.351 Min: 1 00:27:23.351 Completion Queue Entry Size 00:27:23.351 Max: 1 00:27:23.352 Min: 1 00:27:23.352 Number of Namespaces: 0 00:27:23.352 Compare Command: Not Supported 00:27:23.352 Write Uncorrectable Command: Not Supported 00:27:23.352 Dataset Management Command: Not Supported 00:27:23.352 Write Zeroes Command: Not Supported 00:27:23.352 Set Features Save Field: Not Supported 00:27:23.352 Reservations: Not Supported 00:27:23.352 Timestamp: Not Supported 00:27:23.352 Copy: Not Supported 00:27:23.352 Volatile Write Cache: Not Present 00:27:23.352 Atomic Write Unit (Normal): 1 00:27:23.352 Atomic Write Unit (PFail): 1 00:27:23.352 Atomic Compare & Write Unit: 1 00:27:23.352 Fused Compare & Write: Not Supported 00:27:23.352 Scatter-Gather List 00:27:23.352 SGL Command Set: Supported 00:27:23.352 SGL Keyed: Not Supported 00:27:23.352 SGL Bit Bucket Descriptor: Not Supported 00:27:23.352 SGL Metadata Pointer: Not Supported 00:27:23.352 Oversized SGL: Not Supported 00:27:23.352 SGL Metadata Address: Not Supported 00:27:23.352 SGL Offset: Supported 00:27:23.352 Transport SGL Data Block: Not Supported 00:27:23.352 Replay Protected Memory Block: Not Supported 00:27:23.352 00:27:23.352 Firmware Slot Information 00:27:23.352 ========================= 00:27:23.352 Active slot: 0 00:27:23.352 00:27:23.352 00:27:23.352 Error Log 00:27:23.352 
========= 00:27:23.352 00:27:23.352 Active Namespaces 00:27:23.352 ================= 00:27:23.352 Discovery Log Page 00:27:23.352 ================== 00:27:23.352 Generation Counter: 2 00:27:23.352 Number of Records: 2 00:27:23.352 Record Format: 0 00:27:23.352 00:27:23.352 Discovery Log Entry 0 00:27:23.352 ---------------------- 00:27:23.352 Transport Type: 3 (TCP) 00:27:23.352 Address Family: 1 (IPv4) 00:27:23.352 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:23.352 Entry Flags: 00:27:23.352 Duplicate Returned Information: 0 00:27:23.352 Explicit Persistent Connection Support for Discovery: 0 00:27:23.352 Transport Requirements: 00:27:23.352 Secure Channel: Not Specified 00:27:23.352 Port ID: 1 (0x0001) 00:27:23.352 Controller ID: 65535 (0xffff) 00:27:23.352 Admin Max SQ Size: 32 00:27:23.352 Transport Service Identifier: 4420 00:27:23.352 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:23.352 Transport Address: 10.0.0.1 00:27:23.352 Discovery Log Entry 1 00:27:23.352 ---------------------- 00:27:23.352 Transport Type: 3 (TCP) 00:27:23.352 Address Family: 1 (IPv4) 00:27:23.352 Subsystem Type: 2 (NVM Subsystem) 00:27:23.352 Entry Flags: 00:27:23.352 Duplicate Returned Information: 0 00:27:23.352 Explicit Persistent Connection Support for Discovery: 0 00:27:23.352 Transport Requirements: 00:27:23.352 Secure Channel: Not Specified 00:27:23.352 Port ID: 1 (0x0001) 00:27:23.352 Controller ID: 65535 (0xffff) 00:27:23.352 Admin Max SQ Size: 32 00:27:23.352 Transport Service Identifier: 4420 00:27:23.352 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:23.352 Transport Address: 10.0.0.1 00:27:23.352 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:23.352 get_feature(0x01) failed 00:27:23.352 get_feature(0x02) failed 00:27:23.352 get_feature(0x04) failed 00:27:23.352 ===================================================== 00:27:23.352 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:23.352 ===================================================== 00:27:23.352 Controller Capabilities/Features 00:27:23.352 ================================ 00:27:23.352 Vendor ID: 0000 00:27:23.352 Subsystem Vendor ID: 0000 00:27:23.352 Serial Number: 98d22445775109b4762e 00:27:23.352 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:23.352 Firmware Version: 6.8.9-20 00:27:23.352 Recommended Arb Burst: 6 00:27:23.352 IEEE OUI Identifier: 00 00 00 00:27:23.352 Multi-path I/O 00:27:23.352 May have multiple subsystem ports: Yes 00:27:23.352 May have multiple controllers: Yes 00:27:23.352 Associated with SR-IOV VF: No 00:27:23.352 Max Data Transfer Size: Unlimited 00:27:23.352 Max Number of Namespaces: 1024 00:27:23.352 Max Number of I/O Queues: 128 00:27:23.352 NVMe Specification Version (VS): 1.3 00:27:23.352 NVMe Specification Version (Identify): 1.3 00:27:23.352 Maximum Queue Entries: 1024 00:27:23.352 Contiguous Queues Required: No 00:27:23.352 Arbitration Mechanisms Supported 00:27:23.352 Weighted Round Robin: Not Supported 00:27:23.352 Vendor Specific: Not Supported 00:27:23.352 Reset Timeout: 7500 ms 00:27:23.352 Doorbell Stride: 4 bytes 00:27:23.352 NVM Subsystem Reset: Not Supported 00:27:23.352 Command Sets Supported 00:27:23.352 NVM Command Set: Supported 00:27:23.352 Boot Partition: Not Supported 00:27:23.352 
Memory Page Size Minimum: 4096 bytes 00:27:23.352 Memory Page Size Maximum: 4096 bytes 00:27:23.352 Persistent Memory Region: Not Supported 00:27:23.352 Optional Asynchronous Events Supported 00:27:23.352 Namespace Attribute Notices: Supported 00:27:23.352 Firmware Activation Notices: Not Supported 00:27:23.352 ANA Change Notices: Supported 00:27:23.352 PLE Aggregate Log Change Notices: Not Supported 00:27:23.352 LBA Status Info Alert Notices: Not Supported 00:27:23.352 EGE Aggregate Log Change Notices: Not Supported 00:27:23.352 Normal NVM Subsystem Shutdown event: Not Supported 00:27:23.352 Zone Descriptor Change Notices: Not Supported 00:27:23.352 Discovery Log Change Notices: Not Supported 00:27:23.352 Controller Attributes 00:27:23.352 128-bit Host Identifier: Supported 00:27:23.352 Non-Operational Permissive Mode: Not Supported 00:27:23.352 NVM Sets: Not Supported 00:27:23.352 Read Recovery Levels: Not Supported 00:27:23.352 Endurance Groups: Not Supported 00:27:23.352 Predictable Latency Mode: Not Supported 00:27:23.352 Traffic Based Keep ALive: Supported 00:27:23.352 Namespace Granularity: Not Supported 00:27:23.352 SQ Associations: Not Supported 00:27:23.352 UUID List: Not Supported 00:27:23.352 Multi-Domain Subsystem: Not Supported 00:27:23.352 Fixed Capacity Management: Not Supported 00:27:23.352 Variable Capacity Management: Not Supported 00:27:23.352 Delete Endurance Group: Not Supported 00:27:23.352 Delete NVM Set: Not Supported 00:27:23.352 Extended LBA Formats Supported: Not Supported 00:27:23.352 Flexible Data Placement Supported: Not Supported 00:27:23.352 00:27:23.352 Controller Memory Buffer Support 00:27:23.352 ================================ 00:27:23.352 Supported: No 00:27:23.352 00:27:23.352 Persistent Memory Region Support 00:27:23.352 ================================ 00:27:23.352 Supported: No 00:27:23.352 00:27:23.352 Admin Command Set Attributes 00:27:23.352 ============================ 00:27:23.352 Security Send/Receive: Not Supported 00:27:23.352 Format NVM: Not Supported 00:27:23.352 Firmware Activate/Download: Not Supported 00:27:23.352 Namespace Management: Not Supported 00:27:23.352 Device Self-Test: Not Supported 00:27:23.352 Directives: Not Supported 00:27:23.352 NVMe-MI: Not Supported 00:27:23.352 Virtualization Management: Not Supported 00:27:23.352 Doorbell Buffer Config: Not Supported 00:27:23.352 Get LBA Status Capability: Not Supported 00:27:23.352 Command & Feature Lockdown Capability: Not Supported 00:27:23.352 Abort Command Limit: 4 00:27:23.352 Async Event Request Limit: 4 00:27:23.352 Number of Firmware Slots: N/A 00:27:23.352 Firmware Slot 1 Read-Only: N/A 00:27:23.352 Firmware Activation Without Reset: N/A 00:27:23.352 Multiple Update Detection Support: N/A 00:27:23.352 Firmware Update Granularity: No Information Provided 00:27:23.352 Per-Namespace SMART Log: Yes 00:27:23.352 Asymmetric Namespace Access Log Page: Supported 00:27:23.352 ANA Transition Time : 10 sec 00:27:23.352 00:27:23.352 Asymmetric Namespace Access Capabilities 00:27:23.352 ANA Optimized State : Supported 00:27:23.352 ANA Non-Optimized State : Supported 00:27:23.352 ANA Inaccessible State : Supported 00:27:23.352 ANA Persistent Loss State : Supported 00:27:23.352 ANA Change State : Supported 00:27:23.352 ANAGRPID is not changed : No 00:27:23.352 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:23.352 00:27:23.352 ANA Group Identifier Maximum : 128 00:27:23.352 Number of ANA Group Identifiers : 128 00:27:23.352 Max Number of Allowed Namespaces : 1024 00:27:23.352 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:23.352 Command Effects Log Page: Supported 00:27:23.352 Get Log Page Extended Data: Supported 00:27:23.352 Telemetry Log Pages: Not Supported 00:27:23.352 Persistent Event Log Pages: Not Supported 00:27:23.352 Supported Log Pages Log Page: May Support 00:27:23.352 Commands Supported & Effects Log Page: Not Supported 00:27:23.352 Feature Identifiers & Effects Log Page:May Support 00:27:23.352 NVMe-MI Commands & Effects Log Page: May Support 00:27:23.353 Data Area 4 for Telemetry Log: Not Supported 00:27:23.353 Error Log Page Entries Supported: 128 00:27:23.353 Keep Alive: Supported 00:27:23.353 Keep Alive Granularity: 1000 ms 00:27:23.353 00:27:23.353 NVM Command Set Attributes 00:27:23.353 ========================== 00:27:23.353 Submission Queue Entry Size 00:27:23.353 Max: 64 00:27:23.353 Min: 64 00:27:23.353 Completion Queue Entry Size 00:27:23.353 Max: 16 00:27:23.353 Min: 16 00:27:23.353 Number of Namespaces: 1024 00:27:23.353 Compare Command: Not Supported 00:27:23.353 Write Uncorrectable Command: Not Supported 00:27:23.353 Dataset Management Command: Supported 00:27:23.353 Write Zeroes Command: Supported 00:27:23.353 Set Features Save Field: Not Supported 00:27:23.353 Reservations: Not Supported 00:27:23.353 Timestamp: Not Supported 00:27:23.353 Copy: Not Supported 00:27:23.353 Volatile Write Cache: Present 00:27:23.353 Atomic Write Unit (Normal): 1 00:27:23.353 Atomic Write Unit (PFail): 1 00:27:23.353 Atomic Compare & Write Unit: 1 00:27:23.353 Fused Compare & Write: Not Supported 00:27:23.353 Scatter-Gather List 00:27:23.353 SGL Command Set: Supported 00:27:23.353 SGL Keyed: Not Supported 00:27:23.353 SGL Bit Bucket Descriptor: Not Supported 00:27:23.353 SGL Metadata Pointer: Not Supported 00:27:23.353 Oversized SGL: Not Supported 00:27:23.353 SGL Metadata Address: Not Supported 00:27:23.353 SGL Offset: Supported 00:27:23.353 Transport SGL Data Block: Not Supported 00:27:23.353 Replay Protected Memory Block: Not Supported 00:27:23.353 00:27:23.353 Firmware Slot Information 00:27:23.353 ========================= 00:27:23.353 Active slot: 0 00:27:23.353 00:27:23.353 Asymmetric Namespace Access 00:27:23.353 =========================== 00:27:23.353 Change Count : 0 00:27:23.353 Number of ANA Group Descriptors : 1 00:27:23.353 ANA Group Descriptor : 0 00:27:23.353 ANA Group ID : 1 00:27:23.353 Number of NSID Values : 1 00:27:23.353 Change Count : 0 00:27:23.353 ANA State : 1 00:27:23.353 Namespace Identifier : 1 00:27:23.353 00:27:23.353 Commands Supported and Effects 00:27:23.353 ============================== 00:27:23.353 Admin Commands 00:27:23.353 -------------- 00:27:23.353 Get Log Page (02h): Supported 00:27:23.353 Identify (06h): Supported 00:27:23.353 Abort (08h): Supported 00:27:23.353 Set Features (09h): Supported 00:27:23.353 Get Features (0Ah): Supported 00:27:23.353 Asynchronous Event Request (0Ch): Supported 00:27:23.353 Keep Alive (18h): Supported 00:27:23.353 I/O Commands 00:27:23.353 ------------ 00:27:23.353 Flush (00h): Supported 00:27:23.353 Write (01h): Supported LBA-Change 00:27:23.353 Read (02h): Supported 00:27:23.353 Write Zeroes (08h): Supported LBA-Change 00:27:23.353 Dataset Management (09h): Supported 00:27:23.353 00:27:23.353 Error Log 00:27:23.353 ========= 00:27:23.353 Entry: 0 00:27:23.353 Error Count: 0x3 00:27:23.353 Submission Queue Id: 0x0 00:27:23.353 Command Id: 0x5 00:27:23.353 Phase Bit: 0 00:27:23.353 Status Code: 0x2 00:27:23.353 Status Code Type: 0x0 00:27:23.353 Do Not Retry: 1 00:27:23.353 
Error Location: 0x28 00:27:23.353 LBA: 0x0 00:27:23.353 Namespace: 0x0 00:27:23.353 Vendor Log Page: 0x0 00:27:23.353 ----------- 00:27:23.353 Entry: 1 00:27:23.353 Error Count: 0x2 00:27:23.353 Submission Queue Id: 0x0 00:27:23.353 Command Id: 0x5 00:27:23.353 Phase Bit: 0 00:27:23.353 Status Code: 0x2 00:27:23.353 Status Code Type: 0x0 00:27:23.353 Do Not Retry: 1 00:27:23.353 Error Location: 0x28 00:27:23.353 LBA: 0x0 00:27:23.353 Namespace: 0x0 00:27:23.353 Vendor Log Page: 0x0 00:27:23.353 ----------- 00:27:23.353 Entry: 2 00:27:23.353 Error Count: 0x1 00:27:23.353 Submission Queue Id: 0x0 00:27:23.353 Command Id: 0x4 00:27:23.353 Phase Bit: 0 00:27:23.353 Status Code: 0x2 00:27:23.353 Status Code Type: 0x0 00:27:23.353 Do Not Retry: 1 00:27:23.353 Error Location: 0x28 00:27:23.353 LBA: 0x0 00:27:23.353 Namespace: 0x0 00:27:23.353 Vendor Log Page: 0x0 00:27:23.353 00:27:23.353 Number of Queues 00:27:23.353 ================ 00:27:23.353 Number of I/O Submission Queues: 128 00:27:23.353 Number of I/O Completion Queues: 128 00:27:23.353 00:27:23.353 ZNS Specific Controller Data 00:27:23.353 ============================ 00:27:23.353 Zone Append Size Limit: 0 00:27:23.353 00:27:23.353 00:27:23.353 Active Namespaces 00:27:23.353 ================= 00:27:23.353 get_feature(0x05) failed 00:27:23.353 Namespace ID:1 00:27:23.353 Command Set Identifier: NVM (00h) 00:27:23.353 Deallocate: Supported 00:27:23.353 Deallocated/Unwritten Error: Not Supported 00:27:23.353 Deallocated Read Value: Unknown 00:27:23.353 Deallocate in Write Zeroes: Not Supported 00:27:23.353 Deallocated Guard Field: 0xFFFF 00:27:23.353 Flush: Supported 00:27:23.353 Reservation: Not Supported 00:27:23.353 Namespace Sharing Capabilities: Multiple Controllers 00:27:23.353 Size (in LBAs): 3750748848 (1788GiB) 00:27:23.353 Capacity (in LBAs): 3750748848 (1788GiB) 00:27:23.353 Utilization (in LBAs): 3750748848 (1788GiB) 00:27:23.353 UUID: 398c1f41-8fa6-4057-8755-03b95e20e6a0 00:27:23.353 Thin Provisioning: Not Supported 00:27:23.353 Per-NS Atomic Units: Yes 00:27:23.353 Atomic Write Unit (Normal): 8 00:27:23.353 Atomic Write Unit (PFail): 8 00:27:23.353 Preferred Write Granularity: 8 00:27:23.353 Atomic Compare & Write Unit: 8 00:27:23.353 Atomic Boundary Size (Normal): 0 00:27:23.353 Atomic Boundary Size (PFail): 0 00:27:23.353 Atomic Boundary Offset: 0 00:27:23.353 NGUID/EUI64 Never Reused: No 00:27:23.353 ANA group ID: 1 00:27:23.353 Namespace Write Protected: No 00:27:23.353 Number of LBA Formats: 1 00:27:23.353 Current LBA Format: LBA Format #00 00:27:23.353 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:23.353 00:27:23.353 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:23.353 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:23.353 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:27:23.353 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:23.353 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:27:23.353 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:23.353 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:23.353 rmmod nvme_tcp 00:27:23.353 rmmod nvme_fabrics 00:27:23.353 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:23.353 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:27:23.353 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:27:23.353 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:27:23.353 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:23.353 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:23.353 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:23.353 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:27:23.353 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:27:23.353 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:23.353 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:27:23.353 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:23.353 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:23.353 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.353 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:23.353 19:17:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.900 19:17:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:25.900 19:17:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:25.900 19:17:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:25.900 19:17:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:27:25.900 19:17:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:25.900 19:17:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:25.900 19:17:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:25.900 19:17:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:25.900 19:17:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:25.900 19:17:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:25.900 19:17:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:29.200 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:29.200 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:29.200 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:27:29.200 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:29.200 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:29.200 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:29.200 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:29.200 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:29.200 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:29.200 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:29.200 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:29.200 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:29.200 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:29.200 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:29.200 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:29.200 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:29.200 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:29.773 00:27:29.773 real 0m19.725s 00:27:29.773 user 0m5.306s 00:27:29.773 sys 0m11.423s 00:27:29.773 19:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:29.773 19:17:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:29.773 ************************************ 00:27:29.773 END TEST nvmf_identify_kernel_target 00:27:29.773 ************************************ 00:27:29.773 19:17:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:29.773 19:17:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:29.773 19:17:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:29.773 19:17:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.773 ************************************ 00:27:29.773 START TEST nvmf_auth_host 00:27:29.773 ************************************ 00:27:29.773 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:29.773 * Looking for test storage... 
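The nvmf_identify_kernel_target run that just ended tore down the kernel target through configfs (the clean_kernel_target trace above). For reference, that sequence reduces to the sketch below; it is not the script itself — the paths follow the kernel nvmet configfs ABI, the NQN and port number are the ones visible in the trace, and the target of the trace's bare "echo 0" is assumed to be the namespace enable attribute.

nqn=nqn.2016-06.io.spdk:testnqn
cfg=/sys/kernel/config/nvmet

echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"   # disable the namespace (assumed target of the trace's 'echo 0')
rm -f "$cfg/ports/1/subsystems/$nqn"                  # unlink the subsystem from port 1
rmdir "$cfg/subsystems/$nqn/namespaces/1"             # remove namespace,
rmdir "$cfg/ports/1"                                  # port,
rmdir "$cfg/subsystems/$nqn"                          # and subsystem directories, leaf first
modprobe -r nvmet_tcp nvmet                           # then unload the target modules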
00:27:29.773 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:29.773 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:29.773 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:27:29.773 19:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:30.033 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:30.033 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:30.033 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:30.033 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:30.033 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:30.033 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:30.033 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:30.033 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:30.033 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:30.033 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:30.033 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:30.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.034 --rc genhtml_branch_coverage=1 00:27:30.034 --rc genhtml_function_coverage=1 00:27:30.034 --rc genhtml_legend=1 00:27:30.034 --rc geninfo_all_blocks=1 00:27:30.034 --rc geninfo_unexecuted_blocks=1 00:27:30.034 00:27:30.034 ' 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:30.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.034 --rc genhtml_branch_coverage=1 00:27:30.034 --rc genhtml_function_coverage=1 00:27:30.034 --rc genhtml_legend=1 00:27:30.034 --rc geninfo_all_blocks=1 00:27:30.034 --rc geninfo_unexecuted_blocks=1 00:27:30.034 00:27:30.034 ' 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:30.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.034 --rc genhtml_branch_coverage=1 00:27:30.034 --rc genhtml_function_coverage=1 00:27:30.034 --rc genhtml_legend=1 00:27:30.034 --rc geninfo_all_blocks=1 00:27:30.034 --rc geninfo_unexecuted_blocks=1 00:27:30.034 00:27:30.034 ' 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:30.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.034 --rc genhtml_branch_coverage=1 00:27:30.034 --rc genhtml_function_coverage=1 00:27:30.034 --rc genhtml_legend=1 00:27:30.034 --rc geninfo_all_blocks=1 00:27:30.034 --rc geninfo_unexecuted_blocks=1 00:27:30.034 00:27:30.034 ' 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:30.034 19:17:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:30.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:30.034 19:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:38.174 19:17:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:38.174 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:38.174 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.174 
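The probing loop above classifies NICs purely by PCI vendor:device ID (0x8086:0x159b is the Intel E810 found here) and, as the following records show, resolves the bound kernel net devices through sysfs. The same lookup as a standalone sketch, assuming lspci is installed:

# Sketch: find E810 (8086:159b) ports and their net devices,
# mirroring what gather_supported_nvmf_pci_devs does in the trace.
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    for net in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$net" ] && echo "Found net device under $pci: $(basename "$net")"
    done
done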
19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.174 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:38.174 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:38.175 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:38.175 19:17:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:38.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:38.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.550 ms 00:27:38.175 00:27:38.175 --- 10.0.0.2 ping statistics --- 00:27:38.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.175 rtt min/avg/max/mdev = 0.550/0.550/0.550/0.000 ms 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:38.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:38.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:27:38.175 00:27:38.175 --- 10.0.0.1 ping statistics --- 00:27:38.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.175 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3096682 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3096682 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3096682 ']' 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
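nvmf_tcp_init, traced above, wires the two E810 ports into a back-to-back test topology: cvl_0_0 is moved into a private network namespace to host the target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule admitting NVMe/TCP on port 4420. The essential commands, as a sketch using the interface names and addresses from the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # mirrors the trace's ipts rule
ping -c 1 10.0.0.2                                             # verify both directions,
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # as the trace does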
00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:38.175 19:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=491b4a02efbcc6686a875a3bd4b1fd5b 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.b2f 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 491b4a02efbcc6686a875a3bd4b1fd5b 0 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 491b4a02efbcc6686a875a3bd4b1fd5b 0 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=491b4a02efbcc6686a875a3bd4b1fd5b 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.b2f 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.b2f 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.b2f 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:38.437 19:17:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2cf09a2b0a1f32773d737491e8516dd2bc7e5b782d194f5affc6e479081b56ec 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.P17 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2cf09a2b0a1f32773d737491e8516dd2bc7e5b782d194f5affc6e479081b56ec 3 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2cf09a2b0a1f32773d737491e8516dd2bc7e5b782d194f5affc6e479081b56ec 3 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2cf09a2b0a1f32773d737491e8516dd2bc7e5b782d194f5affc6e479081b56ec 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:38.437 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.P17 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.P17 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.P17 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=965c29823b7131137ae3d193a6c6653453d5c066155bfdc5 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Kod 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 965c29823b7131137ae3d193a6c6653453d5c066155bfdc5 0 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 965c29823b7131137ae3d193a6c6653453d5c066155bfdc5 0 
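gen_dhchap_key, traced above, draws N random bytes with xxd and wraps them in the DH-HMAC-CHAP secret representation, DHHC-1:<digest id>:<base64 of the key bytes plus a CRC-32 trailer>:, where digest ids 0 through 3 stand for null, sha256, sha384 and sha512, matching the digests array in the trace. A sketch of that formatting step, assuming python3 and a little-endian CRC trailer:

key=$(xxd -p -c0 -l 16 /dev/urandom)    # 16 random bytes -> 32 hex chars, as in the trace
python3 - "$key" 0 <<'EOF'
# Sketch of format_dhchap_key: frame key || crc32(key) as a DHHC-1 secret.
import base64, binascii, struct, sys
raw = bytes.fromhex(sys.argv[1])
crc = struct.pack('<I', binascii.crc32(raw) & 0xffffffff)   # assumed little-endian trailer
print(f"DHHC-1:{int(sys.argv[2]):02d}:{base64.b64encode(raw + crc).decode()}:")
EOF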
00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=965c29823b7131137ae3d193a6c6653453d5c066155bfdc5 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Kod 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Kod 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Kod 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ff7def0a06a000f6c461db349d4b92dc8ecf7d0c0f11ba88 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.At0 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ff7def0a06a000f6c461db349d4b92dc8ecf7d0c0f11ba88 2 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ff7def0a06a000f6c461db349d4b92dc8ecf7d0c0f11ba88 2 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ff7def0a06a000f6c461db349d4b92dc8ecf7d0c0f11ba88 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.At0 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.At0 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.At0 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:38.699 19:17:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ce5ea174313761c91bb7b95719b4736c 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.jAa 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ce5ea174313761c91bb7b95719b4736c 1 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ce5ea174313761c91bb7b95719b4736c 1 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ce5ea174313761c91bb7b95719b4736c 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.jAa 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.jAa 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.jAa 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=096d88765a61c67c34cf92a00f0938b4 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.CNQ 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 096d88765a61c67c34cf92a00f0938b4 1 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 096d88765a61c67c34cf92a00f0938b4 1 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=096d88765a61c67c34cf92a00f0938b4 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:38.699 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:38.961 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.CNQ 00:27:38.961 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.CNQ 00:27:38.961 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.CNQ 00:27:38.961 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:38.961 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:38.961 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:38.961 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:38.961 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:38.961 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:38.961 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:38.961 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=35db32a30c2441a2ad3a62cdda9d413a7a22ec945e6a94dd 00:27:38.961 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:38.961 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.dbz 00:27:38.961 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 35db32a30c2441a2ad3a62cdda9d413a7a22ec945e6a94dd 2 00:27:38.961 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 35db32a30c2441a2ad3a62cdda9d413a7a22ec945e6a94dd 2 00:27:38.961 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:38.961 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:38.961 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=35db32a30c2441a2ad3a62cdda9d413a7a22ec945e6a94dd 00:27:38.961 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:38.961 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:38.961 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.dbz 00:27:38.961 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.dbz 00:27:38.961 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.dbz 00:27:38.961 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:38.961 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:38.961 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:38.961 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:38.961 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:38.961 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:38.961 19:17:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:38.961 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b4fa97a9f093b96bf75454fde4e36494 00:27:38.961 19:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.HFi 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b4fa97a9f093b96bf75454fde4e36494 0 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b4fa97a9f093b96bf75454fde4e36494 0 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b4fa97a9f093b96bf75454fde4e36494 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.HFi 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.HFi 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.HFi 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0426e381c65e2afa8ce0e3e046f1722075d8930f46a90c5b24421054afab55f5 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.TBD 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0426e381c65e2afa8ce0e3e046f1722075d8930f46a90c5b24421054afab55f5 3 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0426e381c65e2afa8ce0e3e046f1722075d8930f46a90c5b24421054afab55f5 3 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0426e381c65e2afa8ce0e3e046f1722075d8930f46a90c5b24421054afab55f5 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.TBD 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.TBD 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.TBD 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3096682 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3096682 ']' 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:38.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:38.961 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.223 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:39.223 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:27:39.223 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:39.223 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.b2f 00:27:39.223 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.223 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.223 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.223 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.P17 ]] 00:27:39.223 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.P17 00:27:39.223 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.223 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.223 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.223 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:39.223 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Kod 00:27:39.223 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.224 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.224 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.224 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.At0 ]] 00:27:39.224 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.At0 00:27:39.224 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.224 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.224 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.224 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:39.224 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.jAa 00:27:39.224 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.224 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.224 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.224 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.CNQ ]] 00:27:39.224 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.CNQ 00:27:39.224 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.224 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.224 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.224 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:39.224 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.dbz 00:27:39.224 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.224 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.224 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.224 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.HFi ]] 00:27:39.224 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.HFi 00:27:39.224 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.224 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.224 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.224 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:39.224 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.TBD 00:27:39.224 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.224 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.224 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.224 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:39.224 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:39.224 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:39.485 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:39.485 19:17:56 
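With the five key/ckey pairs on disk and the target process up (waitforlisten 3096682), host/auth.sh@80-82 registers each file with SPDK's keyring as key0..key4 and ckey0..ckey3. rpc_cmd here is effectively scripts/rpc.py pointed at /var/tmp/spdk.sock, so one iteration of the loop above is equivalent to:

# standalone equivalent of one rpc_cmd keyring pair (paths are this run's mktemp names)
scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key1 /tmp/spdk.key-null.Kod
scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.At0

The nvmet_auth_init that follows first resolves the initiator-side address (for tcp, get_main_ns_ip falls through to NVMF_INITIATOR_IP, here 10.0.0.1) and then builds a kernel nvmet target to authenticate against.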
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:39.485 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:39.485 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.485 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.485 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:39.485 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.485 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:39.485 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:39.485 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:39.485 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:39.485 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:39.486 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:39.486 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:39.486 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:39.486 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:39.486 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:27:39.486 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:39.486 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:39.486 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:39.486 19:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:42.783 Waiting for block devices as requested 00:27:42.783 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:42.783 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:43.043 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:43.043 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:43.043 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:43.043 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:43.303 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:43.303 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:43.303 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:43.563 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:43.563 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:43.823 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:43.823 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:43.823 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:43.823 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:44.101 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:44.101 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:45.044 19:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:45.044 19:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:45.044 No valid GPT data, bailing 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:45.044 19:18:02 
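configure_kernel_target then assembles the target through configfs. Note that set -x does not print redirections, so the bare echo commands in the next stretch of the trace are writes into nvmet attribute files; a plausible expansion, with attribute names assumed from the standard kernel nvmet configfs layout rather than shown in the log, is:

s=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
p=/sys/kernel/config/nvmet/ports/1
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$s/attr_model"
echo 1 > "$s/attr_allow_any_host"    # host/auth.sh@37 later echoes 0 here once allowed_hosts is linked
echo /dev/nvme0n1 > "$s/namespaces/1/device_path"
echo 1 > "$s/namespaces/1/enable"
echo 10.0.0.1 > "$p/addr_traddr"
echo tcp > "$p/addr_trtype"
echo 4420 > "$p/addr_trsvcid"
echo ipv4 > "$p/addr_adrfam"
ln -s "$s" "$p/subsystems/"          # expose the subsystem on the port

The nvme discover output that follows confirms both the discovery subsystem and nqn.2024-02.io.spdk:cnode0 are reachable on 10.0.0.1:4420.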
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:45.044 00:27:45.044 Discovery Log Number of Records 2, Generation counter 2 00:27:45.044 =====Discovery Log Entry 0====== 00:27:45.044 trtype: tcp 00:27:45.044 adrfam: ipv4 00:27:45.044 subtype: current discovery subsystem 00:27:45.044 treq: not specified, sq flow control disable supported 00:27:45.044 portid: 1 00:27:45.044 trsvcid: 4420 00:27:45.044 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:45.044 traddr: 10.0.0.1 00:27:45.044 eflags: none 00:27:45.044 sectype: none 00:27:45.044 =====Discovery Log Entry 1====== 00:27:45.044 trtype: tcp 00:27:45.044 adrfam: ipv4 00:27:45.044 subtype: nvme subsystem 00:27:45.044 treq: not specified, sq flow control disable supported 00:27:45.044 portid: 1 00:27:45.044 trsvcid: 4420 00:27:45.044 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:45.044 traddr: 10.0.0.1 00:27:45.044 eflags: none 00:27:45.044 sectype: none 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTY1YzI5ODIzYjcxMzExMzdhZTNkMTkzYTZjNjY1MzQ1M2Q1YzA2NjE1NWJmZGM19z00vw==: 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTY1YzI5ODIzYjcxMzExMzdhZTNkMTkzYTZjNjY1MzQ1M2Q1YzA2NjE1NWJmZGM19z00vw==: 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: ]] 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.044 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.304 nvme0n1 00:27:45.304 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.304 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.304 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.304 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.304 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.304 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.304 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.304 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.304 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.304 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.304 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.304 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:45.304 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:45.304 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.304 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:45.304 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.304 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:45.304 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:45.304 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:45.304 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkxYjRhMDJlZmJjYzY2ODZhODc1YTNiZDRiMWZkNWJd/3Lg: 00:27:45.304 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: 00:27:45.305 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:45.305 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:45.305 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkxYjRhMDJlZmJjYzY2ODZhODc1YTNiZDRiMWZkNWJd/3Lg: 00:27:45.305 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: ]] 00:27:45.305 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: 00:27:45.305 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
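nvmet_auth_set_key <digest> <dhgroup> <keyid> (host/auth.sh@42-51) programs the kernel side of each DH-HMAC-CHAP round; the four bare echoes per call line up with the per-host auth attributes under /sys/kernel/config/nvmet/hosts/. The attribute names below are assumed from the kernel nvmet configfs layout, not visible in the trace; the keyid=1 call above would expand to roughly:

h=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$h/dhchap_hash"                         # digest for the handshake
echo ffdhe2048 > "$h/dhchap_dhgroup"                           # DH group for the handshake
echo "$(cat /tmp/spdk.key-null.Kod)" > "$h/dhchap_key"         # host secret, DHHC-1:00:OTY1...
echo "$(cat /tmp/spdk.key-sha384.At0)" > "$h/dhchap_ctrl_key"  # controller secret, DHHC-1:02:ZmY3...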
00:27:45.305 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.305 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:45.305 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:45.305 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:45.305 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.305 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:45.305 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.305 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.305 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.305 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.305 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.305 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.305 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.305 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.305 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.305 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.305 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.305 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.305 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.305 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.305 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:45.305 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.305 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.566 nvme0n1 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.566 19:18:02 
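connect_authenticate <digest> <dhgroup> <keyid> is the initiator half of each round: it pins bdev_nvme to a single digest/DH-group pair, attaches with the matching keyring entries, checks that the controller (and its nvme0n1 namespace) came up, and detaches again. host/auth.sh repeats this cycle for every combination from the printf lists above (sha256,sha384,sha512 x ffdhe2048..ffdhe8192 x keyids 0-4). Standalone, the keyid=0 pass just traced amounts to:

scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0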
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTY1YzI5ODIzYjcxMzExMzdhZTNkMTkzYTZjNjY1MzQ1M2Q1YzA2NjE1NWJmZGM19z00vw==: 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTY1YzI5ODIzYjcxMzExMzdhZTNkMTkzYTZjNjY1MzQ1M2Q1YzA2NjE1NWJmZGM19z00vw==: 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: ]] 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.566 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.828 nvme0n1 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U1ZWExNzQzMTM3NjFjOTFiYjdiOTU3MTliNDczNmNNbQ/1: 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:Y2U1ZWExNzQzMTM3NjFjOTFiYjdiOTU3MTliNDczNmNNbQ/1: 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: ]] 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.828 19:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.090 nvme0n1 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzVkYjMyYTMwYzI0NDFhMmFkM2E2MmNkZGE5ZDQxM2E3YTIyZWM5NDVlNmE5NGRkruratg==: 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzVkYjMyYTMwYzI0NDFhMmFkM2E2MmNkZGE5ZDQxM2E3YTIyZWM5NDVlNmE5NGRkruratg==: 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: ]] 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.090 nvme0n1 00:27:46.090 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.350 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.350 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.350 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.350 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.350 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.350 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.350 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.350 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.350 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.350 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.350 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.350 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:46.350 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.350 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:46.350 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:46.350 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:46.350 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDQyNmUzODFjNjVlMmFmYThjZTBlM2UwNDZmMTcyMjA3NWQ4OTMwZjQ2YTkwYzViMjQ0MjEwNTRhZmFiNTVmNWnS54U=: 00:27:46.350 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:46.350 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:46.350 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:46.350 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQyNmUzODFjNjVlMmFmYThjZTBlM2UwNDZmMTcyMjA3NWQ4OTMwZjQ2YTkwYzViMjQ0MjEwNTRhZmFiNTVmNWnS54U=: 00:27:46.350 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:46.350 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:46.350 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.350 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:46.350 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:46.350 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:46.351 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.351 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:46.351 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.351 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.351 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.351 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.351 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.351 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.351 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.351 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.351 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.351 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.351 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.351 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.351 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.351 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.351 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:46.351 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.351 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.351 nvme0n1 00:27:46.351 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.351 19:18:03 
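keys[4] was generated without a companion controller key (ckeys[4] is empty), so the [[ -z '' ]] guard above skips the controller-secret write on the target, and the ckey array expansion drops --dhchap-ctrlr-key on the initiator. In other words, this pass exercises one-way (host-only) authentication:

# keyid 4: no ckey4 exists, so the attach omits the controller key entirely
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4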
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.351 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.351 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.351 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.351 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.610 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.610 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.610 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.610 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.610 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.610 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:46.610 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.610 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:46.610 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.610 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:46.610 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:46.610 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:46.610 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkxYjRhMDJlZmJjYzY2ODZhODc1YTNiZDRiMWZkNWJd/3Lg: 00:27:46.610 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: 00:27:46.610 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:46.610 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:46.610 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkxYjRhMDJlZmJjYzY2ODZhODc1YTNiZDRiMWZkNWJd/3Lg: 00:27:46.610 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: ]] 00:27:46.610 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: 00:27:46.610 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:46.610 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.610 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:46.610 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:46.610 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:46.610 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.610 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:46.610 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.610 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.610 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.610 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.610 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.610 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.610 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.610 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.610 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.610 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.610 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.610 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.611 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.611 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.611 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:46.611 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.611 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.611 nvme0n1 00:27:46.611 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.611 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.611 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.611 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.611 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.611 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.871 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.871 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.871 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.871 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.871 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.871 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.871 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:46.871 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:27:46.871 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:46.871 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:46.871 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:46.871 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTY1YzI5ODIzYjcxMzExMzdhZTNkMTkzYTZjNjY1MzQ1M2Q1YzA2NjE1NWJmZGM19z00vw==: 00:27:46.871 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: 00:27:46.871 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:46.871 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:46.871 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTY1YzI5ODIzYjcxMzExMzdhZTNkMTkzYTZjNjY1MzQ1M2Q1YzA2NjE1NWJmZGM19z00vw==: 00:27:46.871 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: ]] 00:27:46.871 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: 00:27:46.871 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:46.871 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.871 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:46.871 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:46.871 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:46.871 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.871 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:46.871 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.871 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.871 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.871 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.871 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.872 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.872 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.872 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.872 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.872 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.872 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.872 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.872 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.872 
19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.872 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:46.872 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.872 19:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.872 nvme0n1 00:27:46.872 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.872 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.872 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.872 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.872 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.872 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.132 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.132 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.132 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.132 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.132 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.132 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.132 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:47.132 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.132 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:47.132 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:47.132 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:47.132 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U1ZWExNzQzMTM3NjFjOTFiYjdiOTU3MTliNDczNmNNbQ/1: 00:27:47.132 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: 00:27:47.132 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:47.132 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:47.132 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U1ZWExNzQzMTM3NjFjOTFiYjdiOTU3MTliNDczNmNNbQ/1: 00:27:47.132 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: ]] 00:27:47.132 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: 00:27:47.132 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:47.132 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.132 19:18:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:47.132 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:47.133 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:47.133 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.133 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:47.133 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.133 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.133 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.133 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.133 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.133 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.133 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.133 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.133 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.133 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.133 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.133 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.133 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.133 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.133 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:47.133 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.133 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.133 nvme0n1 00:27:47.133 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.133 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.133 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.133 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.133 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.133 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzVkYjMyYTMwYzI0NDFhMmFkM2E2MmNkZGE5ZDQxM2E3YTIyZWM5NDVlNmE5NGRkruratg==: 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzVkYjMyYTMwYzI0NDFhMmFkM2E2MmNkZGE5ZDQxM2E3YTIyZWM5NDVlNmE5NGRkruratg==: 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: ]] 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.395 19:18:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.395 nvme0n1 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.395 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQyNmUzODFjNjVlMmFmYThjZTBlM2UwNDZmMTcyMjA3NWQ4OTMwZjQ2YTkwYzViMjQ0MjEwNTRhZmFiNTVmNWnS54U=: 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQyNmUzODFjNjVlMmFmYThjZTBlM2UwNDZmMTcyMjA3NWQ4OTMwZjQ2YTkwYzViMjQ0MjEwNTRhZmFiNTVmNWnS54U=: 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:47.656 19:18:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.656 nvme0n1 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.656 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkxYjRhMDJlZmJjYzY2ODZhODc1YTNiZDRiMWZkNWJd/3Lg: 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkxYjRhMDJlZmJjYzY2ODZhODc1YTNiZDRiMWZkNWJd/3Lg: 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: ]] 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.917 19:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.178 nvme0n1 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTY1YzI5ODIzYjcxMzExMzdhZTNkMTkzYTZjNjY1MzQ1M2Q1YzA2NjE1NWJmZGM19z00vw==: 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: 00:27:48.178 19:18:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTY1YzI5ODIzYjcxMzExMzdhZTNkMTkzYTZjNjY1MzQ1M2Q1YzA2NjE1NWJmZGM19z00vw==: 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: ]] 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.178 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.438 nvme0n1 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U1ZWExNzQzMTM3NjFjOTFiYjdiOTU3MTliNDczNmNNbQ/1: 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U1ZWExNzQzMTM3NjFjOTFiYjdiOTU3MTliNDczNmNNbQ/1: 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: ]] 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.439 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.699 nvme0n1 00:27:48.699 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.699 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.699 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.699 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.699 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.699 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzVkYjMyYTMwYzI0NDFhMmFkM2E2MmNkZGE5ZDQxM2E3YTIyZWM5NDVlNmE5NGRkruratg==: 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzVkYjMyYTMwYzI0NDFhMmFkM2E2MmNkZGE5ZDQxM2E3YTIyZWM5NDVlNmE5NGRkruratg==: 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: ]] 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.958 19:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.218 nvme0n1 00:27:49.218 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.218 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQyNmUzODFjNjVlMmFmYThjZTBlM2UwNDZmMTcyMjA3NWQ4OTMwZjQ2YTkwYzViMjQ0MjEwNTRhZmFiNTVmNWnS54U=: 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQyNmUzODFjNjVlMmFmYThjZTBlM2UwNDZmMTcyMjA3NWQ4OTMwZjQ2YTkwYzViMjQ0MjEwNTRhZmFiNTVmNWnS54U=: 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.219 19:18:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.219 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.480 nvme0n1 00:27:49.480 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkxYjRhMDJlZmJjYzY2ODZhODc1YTNiZDRiMWZkNWJd/3Lg: 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkxYjRhMDJlZmJjYzY2ODZhODc1YTNiZDRiMWZkNWJd/3Lg: 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: ]] 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.481 19:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.067 nvme0n1 00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTY1YzI5ODIzYjcxMzExMzdhZTNkMTkzYTZjNjY1MzQ1M2Q1YzA2NjE1NWJmZGM19z00vw==: 00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: 00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTY1YzI5ODIzYjcxMzExMzdhZTNkMTkzYTZjNjY1MzQ1M2Q1YzA2NjE1NWJmZGM19z00vw==: 00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: ]] 00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: 
00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:50.067 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:50.388 nvme0n1
00:27:50.388 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:50.388 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:50.388 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:50.388 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:50.388 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:50.388 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U1ZWExNzQzMTM3NjFjOTFiYjdiOTU3MTliNDczNmNNbQ/1:
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS:
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U1ZWExNzQzMTM3NjFjOTFiYjdiOTU3MTliNDczNmNNbQ/1:
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: ]]
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS:
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:50.716 19:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:50.978 nvme0n1
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
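
Each keyid iteration in this log is the same cycle: restrict the SPDK initiator to the one digest/DH-group pair under test, attach with the matching key material, check that a controller actually appeared (i.e. authentication succeeded), and detach again. A condensed sketch of that cycle using only the RPCs the trace itself shows (rpc_cmd wraps scripts/rpc.py against the running target):

    # Sketch: one connect_authenticate cycle as traced above.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3

        # Negotiate only the combination under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # DH-HMAC-CHAP runs during the fabrics CONNECT; a failed handshake
        # means no controller gets created.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}

        # Authentication succeeded iff the controller is visible afterwards.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }
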
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzVkYjMyYTMwYzI0NDFhMmFkM2E2MmNkZGE5ZDQxM2E3YTIyZWM5NDVlNmE5NGRkruratg==:
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77:
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzVkYjMyYTMwYzI0NDFhMmFkM2E2MmNkZGE5ZDQxM2E3YTIyZWM5NDVlNmE5NGRkruratg==:
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: ]]
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77:
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:50.978 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:51.551 nvme0n1
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQyNmUzODFjNjVlMmFmYThjZTBlM2UwNDZmMTcyMjA3NWQ4OTMwZjQ2YTkwYzViMjQ0MjEwNTRhZmFiNTVmNWnS54U=:
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQyNmUzODFjNjVlMmFmYThjZTBlM2UwNDZmMTcyMjA3NWQ4OTMwZjQ2YTkwYzViMjQ0MjEwNTRhZmFiNTVmNWnS54U=:
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:51.551 19:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:52.124 nvme0n1
00:27:52.124 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:52.124 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:52.124 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:52.124 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:52.124 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:52.124 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:52.124 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:52.124 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:52.124 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:52.124 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:52.124 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
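
The host/auth.sh@100-102 `for` lines mark the loop nesting: the whole section is one sweep over digest x DH group x key index. The trace confirms the sha256 and sha384 digests, the ffdhe2048/ffdhe6144/ffdhe8192 groups, and key IDs 0-4; the exact array contents below are an assumption filled in to that shape, not copied from auth.sh:

    # Sketch of the sweep driving this log (array contents assumed).
    digests=(sha256 sha384 sha512)
    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done
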
00:27:52.124 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:52.124 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:52.124 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:27:52.124 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:52.124 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:52.124 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:52.124 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:52.124 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkxYjRhMDJlZmJjYzY2ODZhODc1YTNiZDRiMWZkNWJd/3Lg:
00:27:52.125 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=:
00:27:52.125 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:52.125 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:52.125 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkxYjRhMDJlZmJjYzY2ODZhODc1YTNiZDRiMWZkNWJd/3Lg:
00:27:52.125 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: ]]
00:27:52.125 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=:
00:27:52.125 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:27:52.125 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:52.125 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:52.125 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:52.125 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:52.125 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:52.125 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:27:52.125 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:52.125 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:52.125 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:52.125 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:52.125 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:52.125 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:52.125 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:52.125 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:52.125 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:52.125 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:52.125 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:52.125 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:52.125 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:52.125 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:52.125 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:52.125 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:52.125 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:52.697 nvme0n1
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTY1YzI5ODIzYjcxMzExMzdhZTNkMTkzYTZjNjY1MzQ1M2Q1YzA2NjE1NWJmZGM19z00vw==:
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==:
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTY1YzI5ODIzYjcxMzExMzdhZTNkMTkzYTZjNjY1MzQ1M2Q1YzA2NjE1NWJmZGM19z00vw==:
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: ]]
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==:
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:52.697 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:52.698 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:52.698 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:52.698 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:52.698 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:52.698 19:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:53.640 nvme0n1
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U1ZWExNzQzMTM3NjFjOTFiYjdiOTU3MTliNDczNmNNbQ/1:
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS:
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U1ZWExNzQzMTM3NjFjOTFiYjdiOTU3MTliNDczNmNNbQ/1:
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: ]]
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS:
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:53.640 19:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:54.212 nvme0n1
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzVkYjMyYTMwYzI0NDFhMmFkM2E2MmNkZGE5ZDQxM2E3YTIyZWM5NDVlNmE5NGRkruratg==:
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77:
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzVkYjMyYTMwYzI0NDFhMmFkM2E2MmNkZGE5ZDQxM2E3YTIyZWM5NDVlNmE5NGRkruratg==:
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: ]]
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77:
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
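
get_main_ns_ip (nvmf/common.sh@769-783), traced in full before every attach, just maps the transport to the right environment variable name and dereferences it; for tcp that is NVMF_INITIATOR_IP, which resolves to 10.0.0.1 in this run. A condensed sketch of the logic the trace shows (using TEST_TRANSPORT as the selector is an assumption):

    # Sketch: pick the address to attach to, per transport.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        ip=${ip_candidates[$TEST_TRANSPORT]}     # -> NVMF_INITIATOR_IP
        [[ -n $ip && -n ${!ip} ]] || return 1    # ${!ip} -> 10.0.0.1
        echo "${!ip}"
    }
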
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:54.212 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:54.783 nvme0n1
00:27:54.783 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:54.783 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:54.783 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:54.783 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:54.783 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:54.783 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:54.783 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:54.783 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:54.783 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:54.783 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:55.045 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:55.045 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:55.045 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4
00:27:55.045 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:55.045 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:55.045 19:18:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:55.045 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:55.045 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQyNmUzODFjNjVlMmFmYThjZTBlM2UwNDZmMTcyMjA3NWQ4OTMwZjQ2YTkwYzViMjQ0MjEwNTRhZmFiNTVmNWnS54U=:
00:27:55.045 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:55.045 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:55.045 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:55.045 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQyNmUzODFjNjVlMmFmYThjZTBlM2UwNDZmMTcyMjA3NWQ4OTMwZjQ2YTkwYzViMjQ0MjEwNTRhZmFiNTVmNWnS54U=:
00:27:55.045 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:55.045 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4
00:27:55.045 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:55.045 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:55.045 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:55.045 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:55.045 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:55.045 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:27:55.045 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:55.045 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:55.045 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:55.045 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:55.045 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:55.045 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:55.045 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:55.045 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:55.045 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:55.045 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:55.045 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:55.045 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:55.045 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:55.045 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:55.045 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:55.045 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:55.045 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:55.617 nvme0n1
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkxYjRhMDJlZmJjYzY2ODZhODc1YTNiZDRiMWZkNWJd/3Lg:
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=:
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkxYjRhMDJlZmJjYzY2ODZhODc1YTNiZDRiMWZkNWJd/3Lg:
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: ]]
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=:
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:55.617 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:55.877 nvme0n1
00:27:55.877 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:55.877 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:55.877 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:55.877 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:55.877 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:55.877 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:55.877 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:55.877 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:55.877 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:55.877 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:55.877 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:55.877 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:55.877 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1
00:27:55.877 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:55.877 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:55.877 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:55.877 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:55.877 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTY1YzI5ODIzYjcxMzExMzdhZTNkMTkzYTZjNjY1MzQ1M2Q1YzA2NjE1NWJmZGM19z00vw==:
00:27:55.877 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==:
00:27:55.877 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:55.877 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:55.878 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTY1YzI5ODIzYjcxMzExMzdhZTNkMTkzYTZjNjY1MzQ1M2Q1YzA2NjE1NWJmZGM19z00vw==:
00:27:55.878 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: ]]
00:27:55.878 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==:
00:27:55.878 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1
00:27:55.878 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:55.878 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:55.878 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:55.878 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:55.878 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:55.878 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:27:55.878 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:55.878 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:55.878 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:55.878 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:55.878 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:55.878 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:55.878 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:55.878 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:55.878 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:55.878 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:55.878 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:55.878 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:55.878 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:55.878 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:55.878 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:55.878 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:55.878 19:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:56.137 nvme0n1
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U1ZWExNzQzMTM3NjFjOTFiYjdiOTU3MTliNDczNmNNbQ/1:
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS:
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U1ZWExNzQzMTM3NjFjOTFiYjdiOTU3MTliNDczNmNNbQ/1:
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: ]]
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS:
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:56.137 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:56.399 nvme0n1
00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3
00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzVkYjMyYTMwYzI0NDFhMmFkM2E2MmNkZGE5ZDQxM2E3YTIyZWM5NDVlNmE5NGRkruratg==:
00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77:
00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzVkYjMyYTMwYzI0NDFhMmFkM2E2MmNkZGE5ZDQxM2E3YTIyZWM5NDVlNmE5NGRkruratg==:
00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: ]]
00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77:
00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3
00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups
ffdhe2048 00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.399 nvme0n1 00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.399 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQyNmUzODFjNjVlMmFmYThjZTBlM2UwNDZmMTcyMjA3NWQ4OTMwZjQ2YTkwYzViMjQ0MjEwNTRhZmFiNTVmNWnS54U=: 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQyNmUzODFjNjVlMmFmYThjZTBlM2UwNDZmMTcyMjA3NWQ4OTMwZjQ2YTkwYzViMjQ0MjEwNTRhZmFiNTVmNWnS54U=: 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.659 nvme0n1 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.659 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkxYjRhMDJlZmJjYzY2ODZhODc1YTNiZDRiMWZkNWJd/3Lg: 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkxYjRhMDJlZmJjYzY2ODZhODc1YTNiZDRiMWZkNWJd/3Lg: 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: ]] 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.919 19:18:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.919 nvme0n1 00:27:56.919 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.919 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.919 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.919 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.919 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.919 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.180 
19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTY1YzI5ODIzYjcxMzExMzdhZTNkMTkzYTZjNjY1MzQ1M2Q1YzA2NjE1NWJmZGM19z00vw==: 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTY1YzI5ODIzYjcxMzExMzdhZTNkMTkzYTZjNjY1MzQ1M2Q1YzA2NjE1NWJmZGM19z00vw==: 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: ]] 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:57.180 19:18:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.180 nvme0n1 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.180 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U1ZWExNzQzMTM3NjFjOTFiYjdiOTU3MTliNDczNmNNbQ/1: 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U1ZWExNzQzMTM3NjFjOTFiYjdiOTU3MTliNDczNmNNbQ/1: 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: ]] 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.440 nvme0n1 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.440 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.441 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.441 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.441 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.700 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:27:57.700 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.700 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.700 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzVkYjMyYTMwYzI0NDFhMmFkM2E2MmNkZGE5ZDQxM2E3YTIyZWM5NDVlNmE5NGRkruratg==: 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzVkYjMyYTMwYzI0NDFhMmFkM2E2MmNkZGE5ZDQxM2E3YTIyZWM5NDVlNmE5NGRkruratg==: 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: ]] 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.701 nvme0n1 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.701 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.961 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.961 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.961 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.961 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.961 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.961 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.961 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:57.961 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.961 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:57.961 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:57.961 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:57.961 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQyNmUzODFjNjVlMmFmYThjZTBlM2UwNDZmMTcyMjA3NWQ4OTMwZjQ2YTkwYzViMjQ0MjEwNTRhZmFiNTVmNWnS54U=: 00:27:57.962 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:57.962 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:57.962 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:57.962 
19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQyNmUzODFjNjVlMmFmYThjZTBlM2UwNDZmMTcyMjA3NWQ4OTMwZjQ2YTkwYzViMjQ0MjEwNTRhZmFiNTVmNWnS54U=: 00:27:57.962 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:57.962 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:57.962 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.962 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:57.962 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:57.962 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:57.962 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.962 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:57.962 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.962 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.962 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.962 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.962 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:57.962 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:57.962 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:57.962 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.962 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.962 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:57.962 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.962 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:57.962 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:57.962 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:57.962 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:57.962 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.962 19:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.962 nvme0n1 00:27:57.962 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.962 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.962 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.962 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.962 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.962 
19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkxYjRhMDJlZmJjYzY2ODZhODc1YTNiZDRiMWZkNWJd/3Lg: 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkxYjRhMDJlZmJjYzY2ODZhODc1YTNiZDRiMWZkNWJd/3Lg: 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: ]] 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.222 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.483 nvme0n1 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTY1YzI5ODIzYjcxMzExMzdhZTNkMTkzYTZjNjY1MzQ1M2Q1YzA2NjE1NWJmZGM19z00vw==: 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTY1YzI5ODIzYjcxMzExMzdhZTNkMTkzYTZjNjY1MzQ1M2Q1YzA2NjE1NWJmZGM19z00vw==: 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: ]] 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:58.483 19:18:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.483 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.743 nvme0n1 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U1ZWExNzQzMTM3NjFjOTFiYjdiOTU3MTliNDczNmNNbQ/1: 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U1ZWExNzQzMTM3NjFjOTFiYjdiOTU3MTliNDczNmNNbQ/1: 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: ]] 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.743 19:18:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.003 nvme0n1 00:27:59.003 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.003 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.003 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.003 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.003 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.003 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.003 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.003 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.003 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.003 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.263 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.263 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.263 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:27:59.263 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.263 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.263 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:59.263 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:59.263 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzVkYjMyYTMwYzI0NDFhMmFkM2E2MmNkZGE5ZDQxM2E3YTIyZWM5NDVlNmE5NGRkruratg==: 00:27:59.263 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: 00:27:59.263 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.263 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:59.263 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzVkYjMyYTMwYzI0NDFhMmFkM2E2MmNkZGE5ZDQxM2E3YTIyZWM5NDVlNmE5NGRkruratg==: 00:27:59.263 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: ]] 00:27:59.263 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: 00:27:59.263 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:59.263 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.263 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.263 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:59.263 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:59.263 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.263 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:59.263 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.263 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.263 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.263 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.263 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:59.263 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:59.263 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:59.263 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.263 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.263 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:59.263 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.263 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:59.263 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:59.263 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:59.263 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:59.263 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.263 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.524 nvme0n1 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQyNmUzODFjNjVlMmFmYThjZTBlM2UwNDZmMTcyMjA3NWQ4OTMwZjQ2YTkwYzViMjQ0MjEwNTRhZmFiNTVmNWnS54U=: 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQyNmUzODFjNjVlMmFmYThjZTBlM2UwNDZmMTcyMjA3NWQ4OTMwZjQ2YTkwYzViMjQ0MjEwNTRhZmFiNTVmNWnS54U=: 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.524 19:18:16 
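
The secrets exchanged above all use the DHHC-1 text representation. Reading one apart, under the usual interpretation of this format (the second field names the transformation hash, 00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512, and the base64 blob carries the raw secret followed by a 4-byte CRC-32 trailer; treat both points as assumptions, since the trace itself only passes the strings through unmodified):

    # Pull apart one of the keys seen in this run (assumed layout above).
    key='DHHC-1:00:NDkxYjRhMDJlZmJjYzY2ODZhODc1YTNiZDRiMWZkNWJd/3Lg:'
    b64=$(echo "$key" | cut -d: -f3)
    total=$(echo "$b64" | base64 -d | wc -c)
    echo "secret length: $((total - 4)) bytes"   # 32 here, CRC trailer excluded
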
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.524 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.784 nvme0n1 00:27:59.784 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.784 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.784 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.784 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
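
The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) assignment seen at host/auth.sh@58 is what makes key 4 different from the others: its controller secret is empty (the trace shows ckey= and [[ -z '' ]]), so the ${var:+word} expansion produces zero words, the attach carries no --dhchap-ctrlr-key argument, and authentication is one-way for that key. A self-contained illustration, with array contents invented for the demo:

    ckeys=([2]="DHHC-1:01:example==:" [4]="")   # key 4 has no ctrlr secret
    for keyid in 2 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid adds ${#ckey[@]} args"   # 2 args, then 0 args
    done
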
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkxYjRhMDJlZmJjYzY2ODZhODc1YTNiZDRiMWZkNWJd/3Lg: 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkxYjRhMDJlZmJjYzY2ODZhODc1YTNiZDRiMWZkNWJd/3Lg: 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: ]] 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.785 19:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.356 nvme0n1 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTY1YzI5ODIzYjcxMzExMzdhZTNkMTkzYTZjNjY1MzQ1M2Q1YzA2NjE1NWJmZGM19z00vw==: 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTY1YzI5ODIzYjcxMzExMzdhZTNkMTkzYTZjNjY1MzQ1M2Q1YzA2NjE1NWJmZGM19z00vw==: 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: ]] 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.356 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.940 nvme0n1 00:28:00.940 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.940 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.940 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.940 19:18:17 
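
On the target side, each nvmet_auth_set_key call echoes four values: the HMAC name, the FFDHE group, the host secret, and (when present) the controller secret. Those echoes line up with writes into the kernel nvmet configfs entry for the host NQN; the attribute paths below are an assumption based on the conventional nvmet layout, not something the trace shows directly:

    # Assumed destination of the four echoes at host/auth.sh@48-51.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)'  > "$host/dhchap_hash"      # digest
    echo 'ffdhe6144'     > "$host/dhchap_dhgroup"   # DH group
    echo "DHHC-1:01:..." > "$host/dhchap_key"       # host secret (elided)
    echo "DHHC-1:01:..." > "$host/dhchap_ctrl_key"  # ctrlr secret, if any
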
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.940 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.940 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.940 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.940 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.940 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.940 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.940 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.941 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.941 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:00.941 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.941 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:00.941 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:00.941 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:00.941 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U1ZWExNzQzMTM3NjFjOTFiYjdiOTU3MTliNDczNmNNbQ/1: 00:28:00.941 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: 00:28:00.941 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:00.941 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:00.941 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U1ZWExNzQzMTM3NjFjOTFiYjdiOTU3MTliNDczNmNNbQ/1: 00:28:00.941 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: ]] 00:28:00.941 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: 00:28:00.941 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:00.941 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.941 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:00.941 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:00.941 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:00.941 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.941 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:00.941 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.941 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.941 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.941 19:18:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.941 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.941 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.941 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.941 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.941 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.941 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.941 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.941 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.941 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.941 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.941 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:00.941 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.941 19:18:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.201 nvme0n1 00:28:01.201 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.201 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.201 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.201 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.201 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.201 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.201 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.201 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.201 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.201 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.201 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.462 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.462 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:01.462 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.462 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:01.462 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:01.462 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:01.462 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzVkYjMyYTMwYzI0NDFhMmFkM2E2MmNkZGE5ZDQxM2E3YTIyZWM5NDVlNmE5NGRkruratg==: 00:28:01.463 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: 00:28:01.463 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:01.463 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:01.463 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzVkYjMyYTMwYzI0NDFhMmFkM2E2MmNkZGE5ZDQxM2E3YTIyZWM5NDVlNmE5NGRkruratg==: 00:28:01.463 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: ]] 00:28:01.463 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: 00:28:01.463 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:01.463 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.463 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:01.463 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:01.463 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:01.463 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.463 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:01.463 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.463 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.463 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.463 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.463 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.463 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.463 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.463 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.463 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.463 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.463 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.463 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.463 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.463 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.463 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:01.463 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.463 
19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.724 nvme0n1 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQyNmUzODFjNjVlMmFmYThjZTBlM2UwNDZmMTcyMjA3NWQ4OTMwZjQ2YTkwYzViMjQ0MjEwNTRhZmFiNTVmNWnS54U=: 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQyNmUzODFjNjVlMmFmYThjZTBlM2UwNDZmMTcyMjA3NWQ4OTMwZjQ2YTkwYzViMjQ0MjEwNTRhZmFiNTVmNWnS54U=: 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.724 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.985 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:01.985 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.985 19:18:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.244 nvme0n1 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.244 19:18:19 
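
The get_main_ns_ip block that repeats before every attach is just a transport-keyed lookup: an associative array maps each transport to the name of the environment variable holding the initiator-facing address, and the function dereferences that name. Condensed from the expansions visible at nvmf/common.sh@769-783 (the TEST_TRANSPORT variable name is an assumption; the trace only shows its expanded value, tcp):

    get_main_ns_ip() {
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP
            [tcp]=NVMF_INITIATOR_IP
        )
        local var=${ip_candidates[$TEST_TRANSPORT]}  # tcp -> NVMF_INITIATOR_IP
        local ip=${!var}                             # indirect: 10.0.0.1 here
        [[ -n $ip ]] && echo "$ip"
    }
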
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkxYjRhMDJlZmJjYzY2ODZhODc1YTNiZDRiMWZkNWJd/3Lg: 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkxYjRhMDJlZmJjYzY2ODZhODc1YTNiZDRiMWZkNWJd/3Lg: 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: ]] 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.244 19:18:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.185 nvme0n1 00:28:03.185 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.185 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.185 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.185 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.185 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.185 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.185 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.185 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.185 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.185 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.185 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.185 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.185 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:28:03.185 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.185 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:03.185 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:03.185 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:03.185 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTY1YzI5ODIzYjcxMzExMzdhZTNkMTkzYTZjNjY1MzQ1M2Q1YzA2NjE1NWJmZGM19z00vw==: 00:28:03.185 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: 00:28:03.186 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:03.186 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:03.186 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTY1YzI5ODIzYjcxMzExMzdhZTNkMTkzYTZjNjY1MzQ1M2Q1YzA2NjE1NWJmZGM19z00vw==: 00:28:03.186 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: ]] 00:28:03.186 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: 00:28:03.186 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:03.186 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.186 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:03.186 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:03.186 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:03.186 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.186 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:03.186 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.186 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.186 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.186 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.186 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:03.186 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:03.186 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:03.186 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.186 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.186 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:03.186 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.186 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:03.186 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:03.186 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:03.186 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:03.186 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.186 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.758 nvme0n1 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U1ZWExNzQzMTM3NjFjOTFiYjdiOTU3MTliNDczNmNNbQ/1: 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U1ZWExNzQzMTM3NjFjOTFiYjdiOTU3MTliNDczNmNNbQ/1: 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: ]] 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.758 
19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.758 19:18:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.329 nvme0n1 00:28:04.329 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.329 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.329 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.329 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.329 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.329 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzVkYjMyYTMwYzI0NDFhMmFkM2E2MmNkZGE5ZDQxM2E3YTIyZWM5NDVlNmE5NGRkruratg==: 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzVkYjMyYTMwYzI0NDFhMmFkM2E2MmNkZGE5ZDQxM2E3YTIyZWM5NDVlNmE5NGRkruratg==: 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
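
One detail of the verification step is worth decoding: the [[ nvme0 == \n\v\m\e\0 ]] lines are not corruption. The right-hand side of == inside [[ ]] is a pattern, and the script quotes it so the controller name matches literally; xtrace renders a quoted pattern with every character backslash-escaped. The check itself reduces to:

    # Success criterion for each cycle, then teardown (rpc.py as above).
    name=$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]               # xtrace prints this as \n\v\m\e\0
    rpc.py bdev_nvme_detach_controller nvme0
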
DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: ]] 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.590 19:18:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.162 nvme0n1 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.162 19:18:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQyNmUzODFjNjVlMmFmYThjZTBlM2UwNDZmMTcyMjA3NWQ4OTMwZjQ2YTkwYzViMjQ0MjEwNTRhZmFiNTVmNWnS54U=: 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQyNmUzODFjNjVlMmFmYThjZTBlM2UwNDZmMTcyMjA3NWQ4OTMwZjQ2YTkwYzViMjQ0MjEwNTRhZmFiNTVmNWnS54U=: 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:05.162 19:18:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.162 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.105 nvme0n1 00:28:06.105 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.105 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.105 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.105 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.105 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.105 19:18:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkxYjRhMDJlZmJjYzY2ODZhODc1YTNiZDRiMWZkNWJd/3Lg: 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkxYjRhMDJlZmJjYzY2ODZhODc1YTNiZDRiMWZkNWJd/3Lg: 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: ]] 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:06.105 nvme0n1 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTY1YzI5ODIzYjcxMzExMzdhZTNkMTkzYTZjNjY1MzQ1M2Q1YzA2NjE1NWJmZGM19z00vw==: 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTY1YzI5ODIzYjcxMzExMzdhZTNkMTkzYTZjNjY1MzQ1M2Q1YzA2NjE1NWJmZGM19z00vw==: 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: ]] 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:06.105 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:06.106 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:06.106 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:06.106 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.106 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.106 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.106 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.106 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:06.106 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:06.106 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:06.106 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.106 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.106 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:06.106 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.106 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:06.106 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:06.106 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:06.106 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:06.106 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.106 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.366 nvme0n1 00:28:06.366 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.366 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.366 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.366 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.366 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.366 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.366 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.366 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.366 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.366 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.366 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.366 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.366 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:06.366 
19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.366 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:06.366 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:06.366 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:06.366 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U1ZWExNzQzMTM3NjFjOTFiYjdiOTU3MTliNDczNmNNbQ/1: 00:28:06.366 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: 00:28:06.366 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:06.366 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:06.366 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U1ZWExNzQzMTM3NjFjOTFiYjdiOTU3MTliNDczNmNNbQ/1: 00:28:06.366 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: ]] 00:28:06.366 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: 00:28:06.366 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:28:06.366 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.366 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:06.366 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:06.366 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:06.366 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.366 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:06.366 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.366 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.366 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.366 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.366 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:06.366 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:06.366 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:06.367 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.367 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.367 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:06.367 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.367 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:06.367 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:06.367 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:06.367 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:06.367 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.367 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.628 nvme0n1 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzVkYjMyYTMwYzI0NDFhMmFkM2E2MmNkZGE5ZDQxM2E3YTIyZWM5NDVlNmE5NGRkruratg==: 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzVkYjMyYTMwYzI0NDFhMmFkM2E2MmNkZGE5ZDQxM2E3YTIyZWM5NDVlNmE5NGRkruratg==: 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: ]] 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.628 
19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.628 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.889 nvme0n1 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQyNmUzODFjNjVlMmFmYThjZTBlM2UwNDZmMTcyMjA3NWQ4OTMwZjQ2YTkwYzViMjQ0MjEwNTRhZmFiNTVmNWnS54U=: 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQyNmUzODFjNjVlMmFmYThjZTBlM2UwNDZmMTcyMjA3NWQ4OTMwZjQ2YTkwYzViMjQ0MjEwNTRhZmFiNTVmNWnS54U=: 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.889 19:18:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.150 nvme0n1 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkxYjRhMDJlZmJjYzY2ODZhODc1YTNiZDRiMWZkNWJd/3Lg: 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkxYjRhMDJlZmJjYzY2ODZhODc1YTNiZDRiMWZkNWJd/3Lg: 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: ]] 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.150 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.411 nvme0n1 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.411 
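
get_main_ns_ip, traced in full just above (nvmf/common.sh@769-@783), resolves the initiator address through a transport-to-variable map plus bash indirection. Reconstructed from the trace; the name of the transport variable being tested is inferred, since xtrace only shows its expansion, tcp:

    # nvmf/common.sh@769-@783 as implied by the trace: pick the environment
    # variable that holds the IP for the active transport, dereference it,
    # and fail if either lookup comes back empty.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # $TEST_TRANSPORT is an assumed name; the trace shows only 'tcp'
        [[ -z $TEST_TRANSPORT ]] && return 1                 # @775
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                 # @776
        [[ -z ${!ip} ]] && return 1   # @778: NVMF_INITIATOR_IP -> 10.0.0.1
        echo "${!ip}"                 # @783
    }
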
19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTY1YzI5ODIzYjcxMzExMzdhZTNkMTkzYTZjNjY1MzQ1M2Q1YzA2NjE1NWJmZGM19z00vw==: 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTY1YzI5ODIzYjcxMzExMzdhZTNkMTkzYTZjNjY1MzQ1M2Q1YzA2NjE1NWJmZGM19z00vw==: 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: ]] 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:07.411 19:18:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:07.411 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.412 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:07.412 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:07.412 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:07.412 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:07.412 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.412 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.674 nvme0n1 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U1ZWExNzQzMTM3NjFjOTFiYjdiOTU3MTliNDczNmNNbQ/1: 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: 00:28:07.674 19:18:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U1ZWExNzQzMTM3NjFjOTFiYjdiOTU3MTliNDczNmNNbQ/1: 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: ]] 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.674 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.936 nvme0n1 00:28:07.936 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.936 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.936 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.936 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.936 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.936 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.936 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.936 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.936 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.936 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.936 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.936 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.936 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:07.936 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.936 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:07.936 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:07.936 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:07.936 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzVkYjMyYTMwYzI0NDFhMmFkM2E2MmNkZGE5ZDQxM2E3YTIyZWM5NDVlNmE5NGRkruratg==: 00:28:07.936 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: 00:28:07.936 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:07.936 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:07.936 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzVkYjMyYTMwYzI0NDFhMmFkM2E2MmNkZGE5ZDQxM2E3YTIyZWM5NDVlNmE5NGRkruratg==: 00:28:07.936 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: ]] 00:28:07.936 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: 00:28:07.936 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:07.936 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.936 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:07.936 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:07.936 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:07.936 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.936 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:07.936 19:18:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.936 19:18:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.936 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.936 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.936 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:07.936 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:07.936 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:07.936 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.936 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.936 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:07.936 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.936 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:07.936 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:07.936 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:07.936 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:07.936 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.936 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.197 nvme0n1 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:08.197 
19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQyNmUzODFjNjVlMmFmYThjZTBlM2UwNDZmMTcyMjA3NWQ4OTMwZjQ2YTkwYzViMjQ0MjEwNTRhZmFiNTVmNWnS54U=: 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQyNmUzODFjNjVlMmFmYThjZTBlM2UwNDZmMTcyMjA3NWQ4OTMwZjQ2YTkwYzViMjQ0MjEwNTRhZmFiNTVmNWnS54U=: 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.197 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
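
The secrets rotated through these rounds use the DHHC-1 representation shared by nvme-cli and SPDK: DHHC-1:<t>:<base64>:, where <t> is 00 for an untransformed secret and 01/02/03 for a SHA-256/384/512-derived one, and the base64 payload carries the secret followed by a 4-byte CRC-32. Note the keyid=4 round above omits --dhchap-ctrlr-key because ckeys[4] is empty, i.e. authentication is unidirectional there. A quick length check on the key4 secret taken from this trace:

    # Sanity-check a DHHC-1 secret from the trace: strip the wrapper and count
    # the decoded bytes (secret plus trailing 4-byte CRC-32).
    key='DHHC-1:03:MDQyNmUzODFjNjVlMmFmYThjZTBlM2UwNDZmMTcyMjA3NWQ4OTMwZjQ2YTkwYzViMjQ0MjEwNTRhZmFiNTVmNWnS54U=:'
    b64=${key#DHHC-1:*:}    # drop the 'DHHC-1:03:' prefix
    b64=${b64%:}            # drop the trailing ':'
    echo -n "$b64" | base64 -d | wc -c   # 68 = 64-byte secret + 4-byte CRC-32
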
00:28:08.459 nvme0n1 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkxYjRhMDJlZmJjYzY2ODZhODc1YTNiZDRiMWZkNWJd/3Lg: 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkxYjRhMDJlZmJjYzY2ODZhODc1YTNiZDRiMWZkNWJd/3Lg: 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: ]] 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:08.459 19:18:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.459 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.721 nvme0n1 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.721 19:18:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTY1YzI5ODIzYjcxMzExMzdhZTNkMTkzYTZjNjY1MzQ1M2Q1YzA2NjE1NWJmZGM19z00vw==: 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTY1YzI5ODIzYjcxMzExMzdhZTNkMTkzYTZjNjY1MzQ1M2Q1YzA2NjE1NWJmZGM19z00vw==: 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: ]] 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.721 19:18:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.721 19:18:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.981 nvme0n1 00:28:08.981 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.981 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.981 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.981 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.981 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.981 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.981 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.981 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.981 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.981 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.242 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.242 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.242 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:09.242 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.242 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:09.242 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:09.242 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:09.242 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U1ZWExNzQzMTM3NjFjOTFiYjdiOTU3MTliNDczNmNNbQ/1: 00:28:09.242 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: 00:28:09.242 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:09.242 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:09.242 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U1ZWExNzQzMTM3NjFjOTFiYjdiOTU3MTliNDczNmNNbQ/1: 00:28:09.242 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: ]] 00:28:09.242 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: 00:28:09.242 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:09.242 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.242 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:09.242 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:09.242 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:09.242 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.242 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:09.242 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.242 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.242 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.242 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.242 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:09.242 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:09.242 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:09.242 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.242 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.242 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:09.242 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.242 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:09.242 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:09.242 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:09.242 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:09.242 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.242 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.503 nvme0n1 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzVkYjMyYTMwYzI0NDFhMmFkM2E2MmNkZGE5ZDQxM2E3YTIyZWM5NDVlNmE5NGRkruratg==: 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzVkYjMyYTMwYzI0NDFhMmFkM2E2MmNkZGE5ZDQxM2E3YTIyZWM5NDVlNmE5NGRkruratg==: 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: ]] 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.503 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.764 nvme0n1 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQyNmUzODFjNjVlMmFmYThjZTBlM2UwNDZmMTcyMjA3NWQ4OTMwZjQ2YTkwYzViMjQ0MjEwNTRhZmFiNTVmNWnS54U=: 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDQyNmUzODFjNjVlMmFmYThjZTBlM2UwNDZmMTcyMjA3NWQ4OTMwZjQ2YTkwYzViMjQ0MjEwNTRhZmFiNTVmNWnS54U=: 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.764 19:18:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.026 nvme0n1 00:28:10.026 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.026 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.026 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.026 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.026 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.026 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.026 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.026 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.026 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.026 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.026 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.026 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:10.026 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.026 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:10.026 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.026 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:10.026 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:10.026 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:10.026 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkxYjRhMDJlZmJjYzY2ODZhODc1YTNiZDRiMWZkNWJd/3Lg: 00:28:10.026 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: 00:28:10.026 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:10.026 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:10.026 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkxYjRhMDJlZmJjYzY2ODZhODc1YTNiZDRiMWZkNWJd/3Lg: 00:28:10.026 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: ]] 00:28:10.026 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: 00:28:10.026 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:10.026 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.026 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:10.026 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:10.026 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:10.026 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.026 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:10.026 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.026 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.286 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.286 19:18:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.286 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:10.286 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:10.286 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:10.286 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.286 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.286 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:10.286 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.286 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:10.286 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:10.286 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:10.286 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:10.286 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.286 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.547 nvme0n1 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTY1YzI5ODIzYjcxMzExMzdhZTNkMTkzYTZjNjY1MzQ1M2Q1YzA2NjE1NWJmZGM19z00vw==: 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTY1YzI5ODIzYjcxMzExMzdhZTNkMTkzYTZjNjY1MzQ1M2Q1YzA2NjE1NWJmZGM19z00vw==: 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: ]] 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:10.547 19:18:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.547 19:18:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.118 nvme0n1 00:28:11.118 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.118 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.118 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.118 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.118 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.118 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.118 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.118 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.118 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.118 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.118 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.119 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.119 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:11.119 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.119 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:11.119 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:11.119 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:11.119 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U1ZWExNzQzMTM3NjFjOTFiYjdiOTU3MTliNDczNmNNbQ/1: 00:28:11.119 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: 00:28:11.119 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:11.119 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:11.119 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U1ZWExNzQzMTM3NjFjOTFiYjdiOTU3MTliNDczNmNNbQ/1: 00:28:11.119 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: ]] 00:28:11.119 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: 00:28:11.119 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:11.119 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.119 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:11.119 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:11.119 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:11.119 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.119 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:11.119 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.119 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.119 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.119 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.119 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:11.119 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:11.119 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:11.119 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.119 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.119 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:11.119 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.119 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:11.119 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:11.119 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:11.119 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:11.119 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.119 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.690 nvme0n1 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzVkYjMyYTMwYzI0NDFhMmFkM2E2MmNkZGE5ZDQxM2E3YTIyZWM5NDVlNmE5NGRkruratg==: 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzVkYjMyYTMwYzI0NDFhMmFkM2E2MmNkZGE5ZDQxM2E3YTIyZWM5NDVlNmE5NGRkruratg==: 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: ]] 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.690 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:11.691 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:11.691 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:11.691 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:11.691 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.691 19:18:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.262 nvme0n1 00:28:12.262 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.262 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.262 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.262 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.262 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.262 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.262 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.262 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.262 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.262 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.262 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.262 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.262 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:12.262 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.262 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:12.262 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:12.262 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:12.262 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQyNmUzODFjNjVlMmFmYThjZTBlM2UwNDZmMTcyMjA3NWQ4OTMwZjQ2YTkwYzViMjQ0MjEwNTRhZmFiNTVmNWnS54U=: 00:28:12.262 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:12.262 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:12.262 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:12.262 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQyNmUzODFjNjVlMmFmYThjZTBlM2UwNDZmMTcyMjA3NWQ4OTMwZjQ2YTkwYzViMjQ0MjEwNTRhZmFiNTVmNWnS54U=: 00:28:12.262 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:12.262 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:12.263 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.263 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:12.263 19:18:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:12.263 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:12.263 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.263 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:12.263 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.263 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.263 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.263 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.263 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:12.263 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:12.263 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:12.263 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.263 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.263 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:12.263 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.263 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:12.263 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:12.263 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:12.263 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:12.263 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.263 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.523 nvme0n1 00:28:12.523 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.523 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.523 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.523 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.523 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.523 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.523 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.523 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.523 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.523 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.523 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.523 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:12.524 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.524 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:12.524 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.524 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:12.524 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:12.524 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:12.524 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkxYjRhMDJlZmJjYzY2ODZhODc1YTNiZDRiMWZkNWJd/3Lg: 00:28:12.524 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: 00:28:12.524 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:12.524 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:12.524 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkxYjRhMDJlZmJjYzY2ODZhODc1YTNiZDRiMWZkNWJd/3Lg: 00:28:12.524 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: ]] 00:28:12.524 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmNmMDlhMmIwYTFmMzI3NzNkNzM3NDkxZTg1MTZkZDJiYzdlNWI3ODJkMTk0ZjVhZmZjNmU0NzkwODFiNTZlY33Gw0M=: 00:28:12.524 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:12.524 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.524 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:12.524 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:12.524 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:12.524 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.524 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:12.524 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.524 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.784 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.784 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.784 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:12.784 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:12.784 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:12.784 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.784 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.784 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:12.784 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.784 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:12.784 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:12.784 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:12.784 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:12.784 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.784 19:18:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.355 nvme0n1 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTY1YzI5ODIzYjcxMzExMzdhZTNkMTkzYTZjNjY1MzQ1M2Q1YzA2NjE1NWJmZGM19z00vw==: 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTY1YzI5ODIzYjcxMzExMzdhZTNkMTkzYTZjNjY1MzQ1M2Q1YzA2NjE1NWJmZGM19z00vw==: 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: ]] 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.355 19:18:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.927 nvme0n1 00:28:13.927 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.927 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.927 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.927 19:18:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.927 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.927 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U1ZWExNzQzMTM3NjFjOTFiYjdiOTU3MTliNDczNmNNbQ/1: 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U1ZWExNzQzMTM3NjFjOTFiYjdiOTU3MTliNDczNmNNbQ/1: 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: ]] 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.189 19:18:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.189 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.761 nvme0n1 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzVkYjMyYTMwYzI0NDFhMmFkM2E2MmNkZGE5ZDQxM2E3YTIyZWM5NDVlNmE5NGRkruratg==: 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzVkYjMyYTMwYzI0NDFhMmFkM2E2MmNkZGE5ZDQxM2E3YTIyZWM5NDVlNmE5NGRkruratg==: 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: ]] 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjRmYTk3YTlmMDkzYjk2YmY3NTQ1NGZkZTRlMzY0OTTmFP77: 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:14.761 19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.761 
19:18:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.703 nvme0n1 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQyNmUzODFjNjVlMmFmYThjZTBlM2UwNDZmMTcyMjA3NWQ4OTMwZjQ2YTkwYzViMjQ0MjEwNTRhZmFiNTVmNWnS54U=: 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQyNmUzODFjNjVlMmFmYThjZTBlM2UwNDZmMTcyMjA3NWQ4OTMwZjQ2YTkwYzViMjQ0MjEwNTRhZmFiNTVmNWnS54U=: 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.703 19:18:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.274 nvme0n1 00:28:16.274 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.274 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.274 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.274 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.274 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.274 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.274 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.274 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.274 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.274 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.274 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.274 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:16.274 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.274 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:16.274 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:16.274 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTY1YzI5ODIzYjcxMzExMzdhZTNkMTkzYTZjNjY1MzQ1M2Q1YzA2NjE1NWJmZGM19z00vw==: 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTY1YzI5ODIzYjcxMzExMzdhZTNkMTkzYTZjNjY1MzQ1M2Q1YzA2NjE1NWJmZGM19z00vw==: 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: ]] 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.275 request: 00:28:16.275 { 00:28:16.275 "name": "nvme0", 00:28:16.275 "trtype": "tcp", 00:28:16.275 "traddr": "10.0.0.1", 00:28:16.275 "adrfam": "ipv4", 00:28:16.275 "trsvcid": "4420", 00:28:16.275 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:16.275 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:16.275 "prchk_reftag": false, 00:28:16.275 "prchk_guard": false, 00:28:16.275 "hdgst": false, 00:28:16.275 "ddgst": false, 00:28:16.275 "allow_unrecognized_csi": false, 00:28:16.275 "method": "bdev_nvme_attach_controller", 00:28:16.275 "req_id": 1 00:28:16.275 } 00:28:16.275 Got JSON-RPC error response 00:28:16.275 response: 00:28:16.275 { 00:28:16.275 "code": -5, 00:28:16.275 "message": "Input/output error" 00:28:16.275 } 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:16.275 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:16.536 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:16.536 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:16.536 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.536 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.536 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:16.536 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.536 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:16.536 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
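# --- annotation: not part of the captured trace -----------------------------
# For readers following the trace: the request/response dump above is the
# first negative test (attach with no DH-HMAC-CHAP key at all), and the step
# below repeats it with only a mismatched key. A minimal sketch of that
# check, assuming SPDK's scripts/rpc.py and the target state built earlier
# in the run; the address, port, and NQNs are copied from the log:
#
#   if ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
#          -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
#          -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2; then
#       echo "ERROR: attach succeeded without a matching DH-HMAC-CHAP key"
#       exit 1
#   fi
#
# The expected outcome is the JSON-RPC error -5 (Input/output error) shown in
# the response dump that follows; the test's NOT() wrapper inverts the exit
# status so the rejected attach counts as a pass.
# -----------------------------------------------------------------------------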
00:28:16.536 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:16.536 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:16.536 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:16.536 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:16.536 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:16.536 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:16.536 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:16.536 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:16.536 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:16.536 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.536 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.536 request: 00:28:16.536 { 00:28:16.536 "name": "nvme0", 00:28:16.537 "trtype": "tcp", 00:28:16.537 "traddr": "10.0.0.1", 00:28:16.537 "adrfam": "ipv4", 00:28:16.537 "trsvcid": "4420", 00:28:16.537 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:16.537 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:16.537 "prchk_reftag": false, 00:28:16.537 "prchk_guard": false, 00:28:16.537 "hdgst": false, 00:28:16.537 "ddgst": false, 00:28:16.537 "dhchap_key": "key2", 00:28:16.537 "allow_unrecognized_csi": false, 00:28:16.537 "method": "bdev_nvme_attach_controller", 00:28:16.537 "req_id": 1 00:28:16.537 } 00:28:16.537 Got JSON-RPC error response 00:28:16.537 response: 00:28:16.537 { 00:28:16.537 "code": -5, 00:28:16.537 "message": "Input/output error" 00:28:16.537 } 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
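# --- annotation: not part of the captured trace -----------------------------
# Pattern worth noting in the steps above: after every rejected attach the
# test calls bdev_nvme_get_controllers and requires an empty list, so a
# failed DH-HMAC-CHAP handshake can never leave a stale controller behind.
# A hedged equivalent of that assertion, assuming jq is installed and the
# default SPDK RPC socket is in use:
#
#   count=$(./scripts/rpc.py bdev_nvme_get_controllers | jq length)
#   (( count == 0 )) || { echo "stale controller after failed auth"; exit 1; }
#
# The next case mixes a valid host key (key1) with a mismatched controller
# key (ckey2), which must likewise be rejected with JSON-RPC error -5.
# -----------------------------------------------------------------------------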
00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.537 request: 00:28:16.537 { 00:28:16.537 "name": "nvme0", 00:28:16.537 "trtype": "tcp", 00:28:16.537 "traddr": "10.0.0.1", 00:28:16.537 "adrfam": "ipv4", 00:28:16.537 "trsvcid": "4420", 00:28:16.537 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:16.537 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:16.537 "prchk_reftag": false, 00:28:16.537 "prchk_guard": false, 00:28:16.537 "hdgst": false, 00:28:16.537 "ddgst": false, 00:28:16.537 "dhchap_key": "key1", 00:28:16.537 "dhchap_ctrlr_key": "ckey2", 00:28:16.537 "allow_unrecognized_csi": false, 00:28:16.537 "method": "bdev_nvme_attach_controller", 00:28:16.537 "req_id": 1 00:28:16.537 } 00:28:16.537 Got JSON-RPC error response 00:28:16.537 response: 00:28:16.537 { 00:28:16.537 "code": -5, 00:28:16.537 "message": "Input/output 
error" 00:28:16.537 } 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.537 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.798 nvme0n1 00:28:16.798 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.798 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:16.798 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.798 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:16.798 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:16.798 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:16.798 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U1ZWExNzQzMTM3NjFjOTFiYjdiOTU3MTliNDczNmNNbQ/1: 00:28:16.798 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: 00:28:16.798 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:16.798 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:16.798 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U1ZWExNzQzMTM3NjFjOTFiYjdiOTU3MTliNDczNmNNbQ/1: 00:28:16.798 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: ]] 00:28:16.798 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: 00:28:16.798 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:16.798 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.798 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.798 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.798 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.798 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:28:16.798 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.798 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.798 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.798 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.798 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:16.798 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:16.798 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:16.798 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:16.798 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:16.798 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:16.798 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:16.798 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:16.798 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.798 19:18:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.058 request: 00:28:17.058 { 00:28:17.058 "name": "nvme0", 00:28:17.058 "dhchap_key": "key1", 00:28:17.058 "dhchap_ctrlr_key": "ckey2", 00:28:17.058 "method": "bdev_nvme_set_keys", 00:28:17.058 "req_id": 1 00:28:17.058 } 00:28:17.058 Got JSON-RPC error response 00:28:17.058 response: 00:28:17.058 { 00:28:17.058 "code": -13, 00:28:17.059 "message": "Permission denied" 00:28:17.059 } 00:28:17.059 19:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:17.059 19:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:17.059 19:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:17.059 19:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:17.059 19:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:28:17.059 19:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.059 19:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:17.059 19:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.059 19:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.059 19:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.059 19:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:17.059 19:18:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:18.000 19:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.000 19:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:18.000 19:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.000 19:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.000 19:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.000 19:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:18.001 19:18:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:18.942 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.942 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:18.942 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.942 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.942 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.202 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:28:19.202 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:19.202 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.202 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:19.202 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:19.202 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:19.202 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTY1YzI5ODIzYjcxMzExMzdhZTNkMTkzYTZjNjY1MzQ1M2Q1YzA2NjE1NWJmZGM19z00vw==: 00:28:19.202 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: 00:28:19.202 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:19.202 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:19.202 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTY1YzI5ODIzYjcxMzExMzdhZTNkMTkzYTZjNjY1MzQ1M2Q1YzA2NjE1NWJmZGM19z00vw==: 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: ]] 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZmY3ZGVmMGEwNmEwMDBmNmM0NjFkYjM0OWQ0YjkyZGM4ZWNmN2QwYzBmMTFiYTg4o2OOFA==: 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.203 nvme0n1 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U1ZWExNzQzMTM3NjFjOTFiYjdiOTU3MTliNDczNmNNbQ/1: 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U1ZWExNzQzMTM3NjFjOTFiYjdiOTU3MTliNDczNmNNbQ/1: 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: ]] 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDk2ZDg4NzY1YTYxYzY3YzM0Y2Y5MmEwMGYwOTM4YjQtO5AS: 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.203 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.203 request: 00:28:19.203 { 00:28:19.203 "name": "nvme0", 00:28:19.203 "dhchap_key": "key2", 00:28:19.203 "dhchap_ctrlr_key": "ckey1", 00:28:19.203 "method": "bdev_nvme_set_keys", 00:28:19.203 "req_id": 1 00:28:19.203 } 00:28:19.203 Got JSON-RPC error response 00:28:19.203 response: 00:28:19.203 { 00:28:19.203 "code": -13, 00:28:19.203 "message": "Permission denied" 00:28:19.203 } 00:28:19.464 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:19.464 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:19.464 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:19.465 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:19.465 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:19.465 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.465 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:19.465 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.465 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.465 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.465 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:28:19.465 19:18:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:28:20.405 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.405 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:20.405 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.405 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.405 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.405 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:28:20.405 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:28:20.405 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:28:20.405 19:18:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:20.405 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:20.405 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:28:20.405 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:20.405 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:28:20.405 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:20.405 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:20.405 rmmod nvme_tcp 00:28:20.405 rmmod nvme_fabrics 00:28:20.405 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:20.405 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:28:20.405 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:28:20.405 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3096682 ']' 00:28:20.405 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3096682 00:28:20.405 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3096682 ']' 00:28:20.405 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 3096682 00:28:20.405 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:28:20.405 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:20.405 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3096682 00:28:20.666 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:20.666 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:20.666 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3096682' 00:28:20.666 killing process with pid 3096682 00:28:20.666 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3096682 00:28:20.666 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3096682 00:28:20.666 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:20.666 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:20.666 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:20.666 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:28:20.666 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:28:20.666 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:20.666 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:28:20.666 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:20.666 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:20.666 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.666 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:28:20.666 19:18:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:23.210 19:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:23.210 19:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:23.210 19:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:23.210 19:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:23.210 19:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:23.210 19:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:28:23.210 19:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:23.210 19:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:23.210 19:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:23.211 19:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:23.211 19:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:23.211 19:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:23.211 19:18:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:26.511 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:26.511 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:26.512 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:26.512 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:26.512 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:26.512 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:26.512 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:26.512 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:26.512 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:26.512 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:26.512 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:26.512 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:26.512 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:26.512 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:26.512 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:26.512 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:26.512 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:26.772 19:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.b2f /tmp/spdk.key-null.Kod /tmp/spdk.key-sha256.jAa /tmp/spdk.key-sha384.dbz /tmp/spdk.key-sha512.TBD /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:26.772 19:18:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:30.978 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:30.978 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:30.979 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:28:30.979 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:30.979 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:30.979 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:30.979 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:30.979 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:30.979 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:30.979 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:30.979 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:30.979 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:30.979 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:30.979 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:30.979 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:30.979 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:30.979 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:30.979 00:28:30.979 real 1m0.937s 00:28:30.979 user 0m54.524s 00:28:30.979 sys 0m16.312s 00:28:30.979 19:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:30.979 19:18:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.979 ************************************ 00:28:30.979 END TEST nvmf_auth_host 00:28:30.979 ************************************ 00:28:30.979 19:18:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:28:30.979 19:18:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:30.979 19:18:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:30.979 19:18:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:30.979 19:18:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.979 ************************************ 00:28:30.979 START TEST nvmf_digest 00:28:30.979 ************************************ 00:28:30.979 19:18:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:30.979 * Looking for test storage... 
00:28:30.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:30.979 19:18:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:30.979 19:18:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:28:30.979 19:18:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:30.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.979 --rc genhtml_branch_coverage=1 00:28:30.979 --rc genhtml_function_coverage=1 00:28:30.979 --rc genhtml_legend=1 00:28:30.979 --rc geninfo_all_blocks=1 00:28:30.979 --rc geninfo_unexecuted_blocks=1 00:28:30.979 00:28:30.979 ' 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:30.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.979 --rc genhtml_branch_coverage=1 00:28:30.979 --rc genhtml_function_coverage=1 00:28:30.979 --rc genhtml_legend=1 00:28:30.979 --rc geninfo_all_blocks=1 00:28:30.979 --rc geninfo_unexecuted_blocks=1 00:28:30.979 00:28:30.979 ' 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:30.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.979 --rc genhtml_branch_coverage=1 00:28:30.979 --rc genhtml_function_coverage=1 00:28:30.979 --rc genhtml_legend=1 00:28:30.979 --rc geninfo_all_blocks=1 00:28:30.979 --rc geninfo_unexecuted_blocks=1 00:28:30.979 00:28:30.979 ' 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:30.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:30.979 --rc genhtml_branch_coverage=1 00:28:30.979 --rc genhtml_function_coverage=1 00:28:30.979 --rc genhtml_legend=1 00:28:30.979 --rc geninfo_all_blocks=1 00:28:30.979 --rc geninfo_unexecuted_blocks=1 00:28:30.979 00:28:30.979 ' 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:30.979 
19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.979 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.980 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:30.980 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.980 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:28:30.980 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:30.980 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:30.980 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:30.980 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:30.980 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:30.980 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:30.980 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:30.980 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:30.980 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:30.980 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:30.980 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:30.980 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:30.980 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:30.980 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:30.980 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:30.980 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:30.980 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:30.980 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:30.980 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:30.980 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:30.980 19:18:48 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.980 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:30.980 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.980 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:30.980 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:30.980 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:28:30.980 19:18:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:39.264 
19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:39.264 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:39.264 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:39.264 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:39.265 Found net devices under 0000:4b:00.0: cvl_0_0 
00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:39.265 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0
00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:28:39.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:39.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.516 ms
00:28:39.265
00:28:39.265 --- 10.0.0.2 ping statistics ---
00:28:39.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:39.265 rtt min/avg/max/mdev = 0.516/0.516/0.516/0.000 ms
00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:39.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:39.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms
00:28:39.265
00:28:39.265 --- 10.0.0.1 ping statistics ---
00:28:39.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:39.265 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms
00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0
00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT
00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]]
00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest
00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:28:39.265 ************************************
00:28:39.265 START TEST nvmf_digest_clean
00:28:39.265 ************************************
00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest
00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean --
host/digest.sh@120 -- # local dsa_initiator 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3114241 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3114241 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3114241 ']' 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:39.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:39.265 19:18:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:39.265 [2024-11-26 19:18:55.790188] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:28:39.265 [2024-11-26 19:18:55.790254] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:39.265 [2024-11-26 19:18:55.873748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.265 [2024-11-26 19:18:55.924806] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:39.265 [2024-11-26 19:18:55.924855] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:39.265 [2024-11-26 19:18:55.924864] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:39.265 [2024-11-26 19:18:55.924871] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:39.265 [2024-11-26 19:18:55.924878] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
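Both the nvmf target above and the bdevperf client further below are launched with --wait-for-rpc, which pauses the app before framework initialization so that accel settings could be injected first. Once the reactor comes up on the next line, the test configures the target over the default /var/tmp/spdk.sock: a null bdev named null0 backing an NVMe-oF subsystem with a TCP listener on 10.0.0.2:4420. A minimal sketch of that sequence with SPDK's stock rpc.py; the null-bdev size and block-size arguments here are illustrative assumptions, not values taken from this log (the serial number and transport opts are the NVMF_SERIAL and NVMF_TRANSPORT_OPTS values traced earlier):

    # finish framework init, then build the digest-test subsystem
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py bdev_null_create null0 1000 512            # size/block args illustrative
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The "Listening on 10.0.0.2 port 4420" tcp.c notice that follows is the observable result of the last step.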
00:28:39.265 [2024-11-26 19:18:55.925635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.525 19:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:39.525 19:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:39.525 19:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:39.525 19:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:39.525 19:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:39.525 19:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:39.525 19:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:39.525 19:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:39.525 19:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:39.525 19:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.525 19:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:39.785 null0 00:28:39.785 [2024-11-26 19:18:56.746095] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:39.785 [2024-11-26 19:18:56.770427] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:39.785 19:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.786 19:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:39.786 19:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:39.786 19:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:39.786 19:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:39.786 19:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:39.786 19:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:39.786 19:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:39.786 19:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3114527 00:28:39.786 19:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3114527 /var/tmp/bperf.sock 00:28:39.786 19:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3114527 ']' 00:28:39.786 19:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:39.786 19:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:39.786 19:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:28:39.786 19:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:39.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:39.786 19:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:39.786 19:18:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:39.786 [2024-11-26 19:18:56.831691] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:28:39.786 [2024-11-26 19:18:56.831758] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3114527 ] 00:28:39.786 [2024-11-26 19:18:56.923899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.786 [2024-11-26 19:18:56.976253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:40.728 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:40.728 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:40.728 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:40.728 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:40.728 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:40.728 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:40.728 19:18:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:40.988 nvme0n1 00:28:41.249 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:41.249 19:18:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:41.249 Running I/O for 2 seconds... 
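Everything on the initiator side of this run goes through the second RPC socket, /var/tmp/bperf.sock. Note the --ddgst flag on the attach above: it enables the NVMe/TCP data digest (CRC32C) for the connection, and the crc32c accel counters read after each run are how the test proves those digests were really computed. The three client steps, condensed from exactly the commands traced in this log:

    # bdevperf was also started with --wait-for-rpc
    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The attach creates the nvme0n1 bdev named above; the latency table that follows is bdevperf's summary for it.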
00:28:43.132 18582.00 IOPS, 72.59 MiB/s [2024-11-26T18:19:00.345Z] 19244.50 IOPS, 75.17 MiB/s
00:28:43.132 Latency(us)
00:28:43.132 [2024-11-26T18:19:00.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:43.132 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:43.132 nvme0n1 : 2.01 19252.01 75.20 0.00 0.00 6639.67 2880.85 24794.45
00:28:43.132 [2024-11-26T18:19:00.345Z] ===================================================================================================================
00:28:43.132 [2024-11-26T18:19:00.345Z] Total : 19252.01 75.20 0.00 0.00 6639.67 2880.85 24794.45
00:28:43.132 {
00:28:43.132 "results": [
00:28:43.132 {
00:28:43.132 "job": "nvme0n1",
00:28:43.132 "core_mask": "0x2",
00:28:43.132 "workload": "randread",
00:28:43.132 "status": "finished",
00:28:43.132 "queue_depth": 128,
00:28:43.132 "io_size": 4096,
00:28:43.132 "runtime": 2.006284,
00:28:43.132 "iops": 19252.010184001865,
00:28:43.132 "mibps": 75.20316478125729,
00:28:43.132 "io_failed": 0,
00:28:43.132 "io_timeout": 0,
00:28:43.132 "avg_latency_us": 6639.6670784897515,
00:28:43.132 "min_latency_us": 2880.8533333333335,
00:28:43.132 "max_latency_us": 24794.453333333335
00:28:43.132 }
00:28:43.132 ],
00:28:43.132 "core_count": 1
00:28:43.132 }
00:28:43.132 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:28:43.132 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:28:43.132 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:28:43.132 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:28:43.132 | select(.opcode=="crc32c")
00:28:43.132 | "\(.module_name) \(.executed)"'
00:28:43.132 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:28:43.393 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:28:43.393 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:28:43.394 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:28:43.394 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:28:43.394 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3114527
00:28:43.394 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3114527 ']'
00:28:43.394 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3114527
00:28:43.394 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:28:43.394 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:43.394 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3114527
00:28:43.394 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:43.394 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 --
'[' reactor_1 = sudo ']' 00:28:43.394 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3114527' 00:28:43.394 killing process with pid 3114527 00:28:43.394 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3114527 00:28:43.394 Received shutdown signal, test time was about 2.000000 seconds 00:28:43.394 00:28:43.394 Latency(us) 00:28:43.394 [2024-11-26T18:19:00.607Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:43.394 [2024-11-26T18:19:00.607Z] =================================================================================================================== 00:28:43.394 [2024-11-26T18:19:00.607Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:43.394 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3114527 00:28:43.653 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:43.654 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:43.654 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:43.654 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:43.654 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:43.654 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:43.654 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:43.654 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3115274 00:28:43.654 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3115274 /var/tmp/bperf.sock 00:28:43.654 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3115274 ']' 00:28:43.654 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:43.654 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:43.654 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:43.654 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:43.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:43.654 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:43.654 19:19:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:43.654 [2024-11-26 19:19:00.749721] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
00:28:43.654 [2024-11-26 19:19:00.749792] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3115274 ] 00:28:43.654 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:43.654 Zero copy mechanism will not be used. 00:28:43.654 [2024-11-26 19:19:00.834250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.914 [2024-11-26 19:19:00.864322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:44.484 19:19:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:44.484 19:19:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:44.484 19:19:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:44.484 19:19:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:44.484 19:19:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:44.744 19:19:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:44.744 19:19:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:45.004 nvme0n1 00:28:45.004 19:19:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:45.004 19:19:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:45.004 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:45.004 Zero copy mechanism will not be used. 00:28:45.004 Running I/O for 2 seconds... 
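The "Zero copy mechanism will not be used" notices are expected for this run: by the message's own terms, the client only applies zero copy to I/O at or below the 65536-byte threshold, so these 131072-byte random reads fall back to copied buffers. After each run the test pulls the accel framework's crc32c counters over the bperf socket and asserts that the software module executed a non-zero number of them, which is the right expectation since every run here passes scan_dsa=false (no DSA offload). The check, as traced at host/digest.sh@36-37 and @94-96:

    # one "<module> <count>" line per crc32c-capable accel module
    ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # the test then requires: module == software, executed count > 0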
00:28:47.329 3146.00 IOPS, 393.25 MiB/s [2024-11-26T18:19:04.542Z] 3746.00 IOPS, 468.25 MiB/s
00:28:47.329 Latency(us)
00:28:47.329 [2024-11-26T18:19:04.542Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:47.329 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:28:47.329 nvme0n1 : 2.00 3749.48 468.69 0.00 0.00 4263.78 498.35 9175.04
00:28:47.329 [2024-11-26T18:19:04.542Z] ===================================================================================================================
00:28:47.329 [2024-11-26T18:19:04.542Z] Total : 3749.48 468.69 0.00 0.00 4263.78 498.35 9175.04
00:28:47.329 {
00:28:47.329 "results": [
00:28:47.329 {
00:28:47.329 "job": "nvme0n1",
00:28:47.329 "core_mask": "0x2",
00:28:47.329 "workload": "randread",
00:28:47.329 "status": "finished",
00:28:47.329 "queue_depth": 16,
00:28:47.329 "io_size": 131072,
00:28:47.329 "runtime": 2.002411,
00:28:47.329 "iops": 3749.4800018577603,
00:28:47.329 "mibps": 468.68500023222003,
00:28:47.329 "io_failed": 0,
00:28:47.329 "io_timeout": 0,
00:28:47.329 "avg_latency_us": 4263.780245071924,
00:28:47.329 "min_latency_us": 498.3466666666667,
00:28:47.329 "max_latency_us": 9175.04
00:28:47.329 }
00:28:47.329 ],
00:28:47.329 "core_count": 1
00:28:47.329 }
00:28:47.329 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:28:47.329 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:28:47.329 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:28:47.329 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:28:47.329 | select(.opcode=="crc32c")
00:28:47.329 | "\(.module_name) \(.executed)"'
00:28:47.329 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:28:47.329 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:28:47.329 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:28:47.329 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:28:47.329 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:28:47.329 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3115274
00:28:47.329 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3115274 ']'
00:28:47.329 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3115274
00:28:47.329 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:28:47.329 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:47.329 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3115274
00:28:47.329 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:47.329 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 =
sudo ']' 00:28:47.329 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3115274' 00:28:47.329 killing process with pid 3115274 00:28:47.329 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3115274 00:28:47.329 Received shutdown signal, test time was about 2.000000 seconds 00:28:47.329 00:28:47.329 Latency(us) 00:28:47.329 [2024-11-26T18:19:04.542Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:47.329 [2024-11-26T18:19:04.542Z] =================================================================================================================== 00:28:47.330 [2024-11-26T18:19:04.543Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:47.330 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3115274 00:28:47.330 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:47.330 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:47.330 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:47.330 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:47.330 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:47.330 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:47.330 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:47.330 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3115952 00:28:47.330 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3115952 /var/tmp/bperf.sock 00:28:47.330 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3115952 ']' 00:28:47.330 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:47.330 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:47.330 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:47.330 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:47.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:47.330 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:47.330 19:19:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:47.589 [2024-11-26 19:19:04.581955] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
00:28:47.589 [2024-11-26 19:19:04.582010] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3115952 ] 00:28:47.589 [2024-11-26 19:19:04.663026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.589 [2024-11-26 19:19:04.691323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:48.527 19:19:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:48.527 19:19:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:48.527 19:19:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:48.527 19:19:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:48.527 19:19:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:48.527 19:19:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:48.527 19:19:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:48.786 nvme0n1 00:28:48.786 19:19:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:48.786 19:19:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:48.786 Running I/O for 2 seconds... 
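The xtrace above captures the whole host-side sequence for one clean-digest pass: bdevperf is launched with -z --wait-for-rpc, digest.sh finishes framework init over the bperf socket, attaches the target with TCP data digest enabled, and then starts the timed run. A minimal sketch of that sequence, built only from commands that appear verbatim in this trace (tree and socket paths copied from it):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bperf.sock
  # bdevperf was started with --wait-for-rpc, so init is deferred until this call:
  $SPDK/scripts/rpc.py -s $SOCK framework_start_init
  # Connect with --ddgst so TCP data PDUs carry a crc32c data digest:
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Start the timed workload; bdevperf prints the Latency(us) table when it ends:
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests

After the run, digest.sh reads the accel crc32c counters back over the same socket (the accel_get_stats call with the jq filter visible earlier in the trace) to verify which module actually executed the digests.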
00:28:50.745 30330.00 IOPS, 118.48 MiB/s [2024-11-26T18:19:07.958Z] 30405.00 IOPS, 118.77 MiB/s 00:28:50.745 Latency(us) 00:28:50.745 [2024-11-26T18:19:07.958Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:50.745 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:50.745 nvme0n1 : 2.00 30401.11 118.75 0.00 0.00 4203.94 2102.61 13981.01 00:28:50.745 [2024-11-26T18:19:07.958Z] =================================================================================================================== 00:28:50.745 [2024-11-26T18:19:07.958Z] Total : 30401.11 118.75 0.00 0.00 4203.94 2102.61 13981.01 00:28:50.745 { 00:28:50.745 "results": [ 00:28:50.745 { 00:28:50.745 "job": "nvme0n1", 00:28:50.745 "core_mask": "0x2", 00:28:50.745 "workload": "randwrite", 00:28:50.745 "status": "finished", 00:28:50.745 "queue_depth": 128, 00:28:50.745 "io_size": 4096, 00:28:50.745 "runtime": 2.004203, 00:28:50.745 "iops": 30401.112062999608, 00:28:50.745 "mibps": 118.75434399609222, 00:28:50.745 "io_failed": 0, 00:28:50.745 "io_timeout": 0, 00:28:50.745 "avg_latency_us": 4203.941778871929, 00:28:50.745 "min_latency_us": 2102.6133333333332, 00:28:50.745 "max_latency_us": 13981.013333333334 00:28:50.745 } 00:28:50.745 ], 00:28:50.745 "core_count": 1 00:28:50.745 } 00:28:51.005 19:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:51.005 19:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:51.005 19:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:51.005 19:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:51.005 | select(.opcode=="crc32c") 00:28:51.005 | "\(.module_name) \(.executed)"' 00:28:51.005 19:19:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:51.005 19:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:51.005 19:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:51.005 19:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:51.005 19:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:51.005 19:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3115952 00:28:51.005 19:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3115952 ']' 00:28:51.005 19:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3115952 00:28:51.005 19:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:51.005 19:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:51.005 19:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3115952 00:28:51.265 19:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:51.265 19:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 
-- # '[' reactor_1 = sudo ']' 00:28:51.265 19:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3115952' 00:28:51.265 killing process with pid 3115952 00:28:51.265 19:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3115952 00:28:51.265 Received shutdown signal, test time was about 2.000000 seconds 00:28:51.265 00:28:51.265 Latency(us) 00:28:51.265 [2024-11-26T18:19:08.478Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:51.265 [2024-11-26T18:19:08.478Z] =================================================================================================================== 00:28:51.265 [2024-11-26T18:19:08.478Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:51.265 19:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3115952 00:28:51.265 19:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:51.265 19:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:51.265 19:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:51.265 19:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:51.265 19:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:51.265 19:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:51.265 19:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:51.265 19:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3116643 00:28:51.265 19:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3116643 /var/tmp/bperf.sock 00:28:51.265 19:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3116643 ']' 00:28:51.265 19:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:51.265 19:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:51.265 19:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:51.265 19:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:51.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:51.265 19:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:51.265 19:19:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:51.265 [2024-11-26 19:19:08.386512] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
00:28:51.265 [2024-11-26 19:19:08.386569] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3116643 ] 00:28:51.265 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:51.265 Zero copy mechanism will not be used. 00:28:51.265 [2024-11-26 19:19:08.468274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.524 [2024-11-26 19:19:08.497494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:52.095 19:19:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:52.095 19:19:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:52.095 19:19:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:52.095 19:19:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:52.095 19:19:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:52.355 19:19:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:52.355 19:19:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:52.615 nvme0n1 00:28:52.873 19:19:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:52.873 19:19:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:52.873 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:52.873 Zero copy mechanism will not be used. 00:28:52.873 Running I/O for 2 seconds... 
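Two details in the pass above are worth decoding: bdevperf notes that the 131072-byte I/O size is over its 65536-byte zero-copy threshold, so the zero-copy mechanism is skipped for this run, and the MiB/s column of every Latency(us) table is simply IOPS scaled by the I/O size. A quick check against the 4 KiB randwrite table printed earlier:

  # MiB/s = IOPS * io_size / 2^20
  echo 'scale=2; 30401.11 * 4096 / 1048576' | bc   # 118.75, matching the table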
00:28:54.751 3826.00 IOPS, 478.25 MiB/s [2024-11-26T18:19:11.964Z] 5158.50 IOPS, 644.81 MiB/s 00:28:54.751 Latency(us) 00:28:54.751 [2024-11-26T18:19:11.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:54.751 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:54.751 nvme0n1 : 2.00 5162.32 645.29 0.00 0.00 3096.06 976.21 13107.20 00:28:54.751 [2024-11-26T18:19:11.964Z] =================================================================================================================== 00:28:54.751 [2024-11-26T18:19:11.964Z] Total : 5162.32 645.29 0.00 0.00 3096.06 976.21 13107.20 00:28:54.751 { 00:28:54.751 "results": [ 00:28:54.751 { 00:28:54.751 "job": "nvme0n1", 00:28:54.751 "core_mask": "0x2", 00:28:54.751 "workload": "randwrite", 00:28:54.751 "status": "finished", 00:28:54.751 "queue_depth": 16, 00:28:54.751 "io_size": 131072, 00:28:54.751 "runtime": 2.002395, 00:28:54.751 "iops": 5162.318124046455, 00:28:54.751 "mibps": 645.2897655058068, 00:28:54.751 "io_failed": 0, 00:28:54.751 "io_timeout": 0, 00:28:54.751 "avg_latency_us": 3096.057445422592, 00:28:54.751 "min_latency_us": 976.2133333333334, 00:28:54.751 "max_latency_us": 13107.2 00:28:54.751 } 00:28:54.751 ], 00:28:54.751 "core_count": 1 00:28:54.751 } 00:28:55.011 19:19:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:55.011 19:19:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:55.011 19:19:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:55.011 19:19:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:55.011 | select(.opcode=="crc32c") 00:28:55.011 | "\(.module_name) \(.executed)"' 00:28:55.011 19:19:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:55.011 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:55.011 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:55.011 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:55.011 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:55.011 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3116643 00:28:55.011 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3116643 ']' 00:28:55.011 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3116643 00:28:55.011 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:55.011 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:55.011 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3116643 00:28:55.271 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:55.271 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 
= sudo ']' 00:28:55.271 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3116643' 00:28:55.271 killing process with pid 3116643 00:28:55.271 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3116643 00:28:55.271 Received shutdown signal, test time was about 2.000000 seconds 00:28:55.271 00:28:55.271 Latency(us) 00:28:55.271 [2024-11-26T18:19:12.484Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:55.271 [2024-11-26T18:19:12.484Z] =================================================================================================================== 00:28:55.271 [2024-11-26T18:19:12.484Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:55.271 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3116643 00:28:55.271 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3114241 00:28:55.271 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3114241 ']' 00:28:55.271 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3114241 00:28:55.271 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:55.271 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:55.271 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3114241 00:28:55.271 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:55.271 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:55.271 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3114241' 00:28:55.271 killing process with pid 3114241 00:28:55.271 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3114241 00:28:55.271 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3114241 00:28:55.532 00:28:55.532 real 0m16.777s 00:28:55.532 user 0m33.172s 00:28:55.532 sys 0m3.776s 00:28:55.532 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:55.532 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:55.532 ************************************ 00:28:55.532 END TEST nvmf_digest_clean 00:28:55.532 ************************************ 00:28:55.532 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:55.532 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:55.532 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:55.532 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:55.532 ************************************ 00:28:55.532 START TEST nvmf_digest_error 00:28:55.532 ************************************ 00:28:55.532 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:28:55.532 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:55.532 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:55.532 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:55.532 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:55.532 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3117561 00:28:55.532 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3117561 00:28:55.532 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:55.532 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3117561 ']' 00:28:55.532 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:55.532 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:55.532 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:55.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:55.532 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:55.532 19:19:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:55.532 [2024-11-26 19:19:12.647016] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:28:55.532 [2024-11-26 19:19:12.647073] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:55.532 [2024-11-26 19:19:12.739380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.791 [2024-11-26 19:19:12.770402] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:55.791 [2024-11-26 19:19:12.770431] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:55.791 [2024-11-26 19:19:12.770437] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:55.791 [2024-11-26 19:19:12.770441] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:55.791 [2024-11-26 19:19:12.770448] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
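This marks the switch from the clean-digest passes to nvmf_digest_error: a fresh nvmf_tgt (pid 3117561) is started with --wait-for-rpc so the crc32c opcode can be rerouted to the accel "error" module before the framework initializes, which is what the accel_rpc notice below confirms. The target configuration batch is not echoed line by line; a hedged sketch of the visible steps (the framework_start_init call is implied by --wait-for-rpc startup, not shown in this trace):

  # Route crc32c through the fault-injection module before init completes:
  rpc.py accel_assign_opc -o crc32c -m error
  rpc.py framework_start_init   # assumed: a --wait-for-rpc target needs this to proceed
  # The notices that follow show the result: a TCP transport, a null0 bdev,
  # and a listener on 10.0.0.2 port 4420.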
00:28:55.791 [2024-11-26 19:19:12.770915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.361 19:19:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:56.361 19:19:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:56.361 19:19:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:56.361 19:19:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:56.361 19:19:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:56.361 19:19:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:56.361 19:19:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:56.361 19:19:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.361 19:19:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:56.361 [2024-11-26 19:19:13.472837] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:56.361 19:19:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.361 19:19:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:56.361 19:19:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:56.361 19:19:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.361 19:19:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:56.361 null0 00:28:56.361 [2024-11-26 19:19:13.551771] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:56.622 [2024-11-26 19:19:13.575963] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:56.622 19:19:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.622 19:19:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:56.622 19:19:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:56.622 19:19:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:56.622 19:19:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:56.622 19:19:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:56.622 19:19:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3117700 00:28:56.622 19:19:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3117700 /var/tmp/bperf.sock 00:28:56.622 19:19:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3117700 ']' 00:28:56.622 19:19:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
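The bdevperf invocation for the error test, shown at digest.sh@57 above, differs from the clean runs in dropping --wait-for-rpc, presumably because no host-side accel reconfiguration has to precede init here; the fault injection lives on the target. Reading the flags straight from the trace: -m 2 is the core mask (hence the "Reactor started on core 1" notice), -r names the RPC socket, -w randread -o 4096 -q 128 -t 2 describe the workload, and -z keeps bdevperf idle until the perform_tests RPC arrives:

  $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z   # then drive it via bdevperf.py perform_tests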
00:28:56.622 19:19:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:56.622 19:19:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:56.622 19:19:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:56.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:56.622 19:19:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:56.622 19:19:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:56.622 [2024-11-26 19:19:13.633566] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:28:56.622 [2024-11-26 19:19:13.633613] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3117700 ] 00:28:56.622 [2024-11-26 19:19:13.715878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.622 [2024-11-26 19:19:13.745999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:57.221 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:57.221 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:57.221 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:57.221 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:57.481 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:57.481 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.481 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:57.481 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.481 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:57.481 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:57.741 nvme0n1 00:28:57.741 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:57.741 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.741 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
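Order matters in the setup just traced: injection is first disabled (accel_error_inject_error -o crc32c -t disable) so the --ddgst attach itself completes cleanly, the bperf side is told to retry forever (--bdev-retry-count -1, with --nvme-error-stat for accounting), and only then is the fault armed with accel_error_inject_error -o crc32c -t corrupt -i 256. Each corrupted digest surfaces below as a paired nvme_tcp data-digest *ERROR* and a COMMAND TRANSIENT TRANSPORT ERROR completion that the retry policy resubmits. A rough tally from a saved copy of this output (file name hypothetical; the expected count assumes one error line per injected corruption):

  grep -c 'data digest error on tqpair' bperf.log   # expect about 256, one per -i 256 injection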
00:28:57.741 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.741 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:57.741 19:19:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:58.000 Running I/O for 2 seconds... 00:28:58.000 [2024-11-26 19:19:14.989983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.000 [2024-11-26 19:19:14.990014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.000 [2024-11-26 19:19:14.990023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.000 [2024-11-26 19:19:15.001900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.000 [2024-11-26 19:19:15.001920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.000 [2024-11-26 19:19:15.001927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.000 [2024-11-26 19:19:15.012610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.000 [2024-11-26 19:19:15.012627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.000 [2024-11-26 19:19:15.012639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.000 [2024-11-26 19:19:15.021565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.000 [2024-11-26 19:19:15.021583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.000 [2024-11-26 19:19:15.021590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.000 [2024-11-26 19:19:15.031752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.000 [2024-11-26 19:19:15.031770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.000 [2024-11-26 19:19:15.031776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.000 [2024-11-26 19:19:15.040392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.000 [2024-11-26 19:19:15.040409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.000 [2024-11-26 19:19:15.040416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.000 [2024-11-26 19:19:15.048734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.000 [2024-11-26 19:19:15.048752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.000 [2024-11-26 19:19:15.048758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.000 [2024-11-26 19:19:15.057803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.001 [2024-11-26 19:19:15.057821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:18691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.001 [2024-11-26 19:19:15.057827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.001 [2024-11-26 19:19:15.066562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.001 [2024-11-26 19:19:15.066579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.001 [2024-11-26 19:19:15.066586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.001 [2024-11-26 19:19:15.075615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.001 [2024-11-26 19:19:15.075632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.001 [2024-11-26 19:19:15.075638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.001 [2024-11-26 19:19:15.084815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.001 [2024-11-26 19:19:15.084832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.001 [2024-11-26 19:19:15.084838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.001 [2024-11-26 19:19:15.093748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.001 [2024-11-26 19:19:15.093766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.001 [2024-11-26 19:19:15.093772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.001 [2024-11-26 19:19:15.102845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.001 [2024-11-26 19:19:15.102862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.001 [2024-11-26 19:19:15.102869] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.001 [2024-11-26 19:19:15.110837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.001 [2024-11-26 19:19:15.110854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.001 [2024-11-26 19:19:15.110861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.001 [2024-11-26 19:19:15.120983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.001 [2024-11-26 19:19:15.121001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.001 [2024-11-26 19:19:15.121008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.001 [2024-11-26 19:19:15.129441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.001 [2024-11-26 19:19:15.129459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.001 [2024-11-26 19:19:15.129466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.001 [2024-11-26 19:19:15.140982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.001 [2024-11-26 19:19:15.141000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.001 [2024-11-26 19:19:15.141006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.001 [2024-11-26 19:19:15.149418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.001 [2024-11-26 19:19:15.149435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.001 [2024-11-26 19:19:15.149441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.001 [2024-11-26 19:19:15.158655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.001 [2024-11-26 19:19:15.158672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.001 [2024-11-26 19:19:15.158678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.001 [2024-11-26 19:19:15.167483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.001 [2024-11-26 19:19:15.167500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.001 [2024-11-26 
19:19:15.167511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.001 [2024-11-26 19:19:15.176253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.001 [2024-11-26 19:19:15.176270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:20301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.001 [2024-11-26 19:19:15.176277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.001 [2024-11-26 19:19:15.185378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.001 [2024-11-26 19:19:15.185396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.001 [2024-11-26 19:19:15.185403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.001 [2024-11-26 19:19:15.193881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.001 [2024-11-26 19:19:15.193898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.001 [2024-11-26 19:19:15.193904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.001 [2024-11-26 19:19:15.203353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.001 [2024-11-26 19:19:15.203370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.001 [2024-11-26 19:19:15.203376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.262 [2024-11-26 19:19:15.212871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.262 [2024-11-26 19:19:15.212888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.262 [2024-11-26 19:19:15.212895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.262 [2024-11-26 19:19:15.221418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.262 [2024-11-26 19:19:15.221434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.262 [2024-11-26 19:19:15.221440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.262 [2024-11-26 19:19:15.231638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.262 [2024-11-26 19:19:15.231655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25386 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:58.262 [2024-11-26 19:19:15.231662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.262 [2024-11-26 19:19:15.242993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.262 [2024-11-26 19:19:15.243010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.262 [2024-11-26 19:19:15.243017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.262 [2024-11-26 19:19:15.253830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.262 [2024-11-26 19:19:15.253850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.262 [2024-11-26 19:19:15.253857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.262 [2024-11-26 19:19:15.262508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.262 [2024-11-26 19:19:15.262525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.262 [2024-11-26 19:19:15.262532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.262 [2024-11-26 19:19:15.271992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.262 [2024-11-26 19:19:15.272009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.262 [2024-11-26 19:19:15.272016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.262 [2024-11-26 19:19:15.281269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.263 [2024-11-26 19:19:15.281285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.263 [2024-11-26 19:19:15.281292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.263 [2024-11-26 19:19:15.290436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.263 [2024-11-26 19:19:15.290454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.263 [2024-11-26 19:19:15.290460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.263 [2024-11-26 19:19:15.299103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.263 [2024-11-26 19:19:15.299120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:35 nsid:1 lba:23576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.263 [2024-11-26 19:19:15.299126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.263 [2024-11-26 19:19:15.309856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.263 [2024-11-26 19:19:15.309872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.263 [2024-11-26 19:19:15.309879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.263 [2024-11-26 19:19:15.318062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.263 [2024-11-26 19:19:15.318079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.263 [2024-11-26 19:19:15.318085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.263 [2024-11-26 19:19:15.327662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.263 [2024-11-26 19:19:15.327678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.263 [2024-11-26 19:19:15.327685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.263 [2024-11-26 19:19:15.336238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.263 [2024-11-26 19:19:15.336255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.263 [2024-11-26 19:19:15.336261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.263 [2024-11-26 19:19:15.345779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.263 [2024-11-26 19:19:15.345796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.263 [2024-11-26 19:19:15.345802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.263 [2024-11-26 19:19:15.353709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.263 [2024-11-26 19:19:15.353726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.263 [2024-11-26 19:19:15.353732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.263 [2024-11-26 19:19:15.363735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.263 [2024-11-26 19:19:15.363752] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.263 [2024-11-26 19:19:15.363758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.263 [2024-11-26 19:19:15.374085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.263 [2024-11-26 19:19:15.374102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.263 [2024-11-26 19:19:15.374108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.263 [2024-11-26 19:19:15.381761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.263 [2024-11-26 19:19:15.381778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.263 [2024-11-26 19:19:15.381784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.263 [2024-11-26 19:19:15.391142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.263 [2024-11-26 19:19:15.391163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.263 [2024-11-26 19:19:15.391170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.263 [2024-11-26 19:19:15.400474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.263 [2024-11-26 19:19:15.400491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:25158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.263 [2024-11-26 19:19:15.400497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.263 [2024-11-26 19:19:15.411387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.263 [2024-11-26 19:19:15.411404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.263 [2024-11-26 19:19:15.411413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.263 [2024-11-26 19:19:15.419544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.263 [2024-11-26 19:19:15.419562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.263 [2024-11-26 19:19:15.419568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.263 [2024-11-26 19:19:15.430512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 
00:28:58.263 [2024-11-26 19:19:15.430529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.263 [2024-11-26 19:19:15.430536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.263 [2024-11-26 19:19:15.440130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.263 [2024-11-26 19:19:15.440148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.263 [2024-11-26 19:19:15.440154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.263 [2024-11-26 19:19:15.448224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.263 [2024-11-26 19:19:15.448242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:13439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.263 [2024-11-26 19:19:15.448248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.263 [2024-11-26 19:19:15.457661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.263 [2024-11-26 19:19:15.457677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.263 [2024-11-26 19:19:15.457684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.263 [2024-11-26 19:19:15.467326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.263 [2024-11-26 19:19:15.467343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.263 [2024-11-26 19:19:15.467349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.524 [2024-11-26 19:19:15.476498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.524 [2024-11-26 19:19:15.476515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.524 [2024-11-26 19:19:15.476522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.524 [2024-11-26 19:19:15.485398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.524 [2024-11-26 19:19:15.485415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.524 [2024-11-26 19:19:15.485421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.524 [2024-11-26 19:19:15.493818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.524 [2024-11-26 19:19:15.493838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.524 [2024-11-26 19:19:15.493845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.524 [2024-11-26 19:19:15.503325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.524 [2024-11-26 19:19:15.503342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.524 [2024-11-26 19:19:15.503348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.524 [2024-11-26 19:19:15.512191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.524 [2024-11-26 19:19:15.512208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.524 [2024-11-26 19:19:15.512214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.524 [2024-11-26 19:19:15.520648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.524 [2024-11-26 19:19:15.520665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.524 [2024-11-26 19:19:15.520672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.524 [2024-11-26 19:19:15.529804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.524 [2024-11-26 19:19:15.529821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.524 [2024-11-26 19:19:15.529827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.524 [2024-11-26 19:19:15.538624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.524 [2024-11-26 19:19:15.538641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.524 [2024-11-26 19:19:15.538648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.524 [2024-11-26 19:19:15.547957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.524 [2024-11-26 19:19:15.547974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.524 [2024-11-26 19:19:15.547981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.524 [2024-11-26 19:19:15.556877] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.524 [2024-11-26 19:19:15.556894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.524 [2024-11-26 19:19:15.556900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.524 [2024-11-26 19:19:15.564997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.524 [2024-11-26 19:19:15.565014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.524 [2024-11-26 19:19:15.565023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.524 [2024-11-26 19:19:15.574178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.524 [2024-11-26 19:19:15.574195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.524 [2024-11-26 19:19:15.574202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.524 [2024-11-26 19:19:15.583817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.524 [2024-11-26 19:19:15.583833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.524 [2024-11-26 19:19:15.583840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.524 [2024-11-26 19:19:15.594590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.524 [2024-11-26 19:19:15.594607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.524 [2024-11-26 19:19:15.594614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.524 [2024-11-26 19:19:15.603621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.524 [2024-11-26 19:19:15.603638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.524 [2024-11-26 19:19:15.603644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.524 [2024-11-26 19:19:15.612023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.524 [2024-11-26 19:19:15.612040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.524 [2024-11-26 19:19:15.612047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:28:58.524 [2024-11-26 19:19:15.620754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.524 [2024-11-26 19:19:15.620771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.524 [2024-11-26 19:19:15.620777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.524 [2024-11-26 19:19:15.630193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.524 [2024-11-26 19:19:15.630210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.524 [2024-11-26 19:19:15.630216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.524 [2024-11-26 19:19:15.638899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.524 [2024-11-26 19:19:15.638916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.524 [2024-11-26 19:19:15.638923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.524 [2024-11-26 19:19:15.648257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.524 [2024-11-26 19:19:15.648277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:86 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.524 [2024-11-26 19:19:15.648284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.524 [2024-11-26 19:19:15.656463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.524 [2024-11-26 19:19:15.656480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.524 [2024-11-26 19:19:15.656486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.524 [2024-11-26 19:19:15.665287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.524 [2024-11-26 19:19:15.665304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.524 [2024-11-26 19:19:15.665311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.524 [2024-11-26 19:19:15.674808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.524 [2024-11-26 19:19:15.674825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.524 [2024-11-26 19:19:15.674832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.524 [2024-11-26 19:19:15.683312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.524 [2024-11-26 19:19:15.683329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.524 [2024-11-26 19:19:15.683335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.524 [2024-11-26 19:19:15.692679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.524 [2024-11-26 19:19:15.692697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.524 [2024-11-26 19:19:15.692703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.524 [2024-11-26 19:19:15.700949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.525 [2024-11-26 19:19:15.700967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.525 [2024-11-26 19:19:15.700973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.525 [2024-11-26 19:19:15.709994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.525 [2024-11-26 19:19:15.710012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.525 [2024-11-26 19:19:15.710020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.525 [2024-11-26 19:19:15.719762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.525 [2024-11-26 19:19:15.719779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.525 [2024-11-26 19:19:15.719786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.525 [2024-11-26 19:19:15.729947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.525 [2024-11-26 19:19:15.729964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.525 [2024-11-26 19:19:15.729970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.785 [2024-11-26 19:19:15.738038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.785 [2024-11-26 19:19:15.738055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.785 [2024-11-26 19:19:15.738062] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.785 [2024-11-26 19:19:15.746970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.785 [2024-11-26 19:19:15.746988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.785 [2024-11-26 19:19:15.746995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.785 [2024-11-26 19:19:15.756755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.785 [2024-11-26 19:19:15.756773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.785 [2024-11-26 19:19:15.756780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.785 [2024-11-26 19:19:15.765690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.785 [2024-11-26 19:19:15.765707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.785 [2024-11-26 19:19:15.765713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.785 [2024-11-26 19:19:15.773898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.785 [2024-11-26 19:19:15.773915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.785 [2024-11-26 19:19:15.773921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.785 [2024-11-26 19:19:15.783237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.785 [2024-11-26 19:19:15.783254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.785 [2024-11-26 19:19:15.783260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.785 [2024-11-26 19:19:15.792167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.785 [2024-11-26 19:19:15.792184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.785 [2024-11-26 19:19:15.792190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.785 [2024-11-26 19:19:15.800689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.785 [2024-11-26 19:19:15.800706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:58.785 [2024-11-26 19:19:15.800715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.785 [2024-11-26 19:19:15.809976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.785 [2024-11-26 19:19:15.809993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:21222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.785 [2024-11-26 19:19:15.810000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.785 [2024-11-26 19:19:15.819128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.785 [2024-11-26 19:19:15.819145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.785 [2024-11-26 19:19:15.819152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.785 [2024-11-26 19:19:15.828909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.785 [2024-11-26 19:19:15.828926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.785 [2024-11-26 19:19:15.828932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.785 [2024-11-26 19:19:15.836887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.785 [2024-11-26 19:19:15.836904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.785 [2024-11-26 19:19:15.836910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.785 [2024-11-26 19:19:15.846838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.785 [2024-11-26 19:19:15.846855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:24990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.785 [2024-11-26 19:19:15.846861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.785 [2024-11-26 19:19:15.856503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.785 [2024-11-26 19:19:15.856520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.785 [2024-11-26 19:19:15.856526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.785 [2024-11-26 19:19:15.866156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.785 [2024-11-26 19:19:15.866179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 
lba:17810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.785 [2024-11-26 19:19:15.866186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.785 [2024-11-26 19:19:15.874136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.785 [2024-11-26 19:19:15.874154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.785 [2024-11-26 19:19:15.874165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.785 [2024-11-26 19:19:15.883952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.785 [2024-11-26 19:19:15.883972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.785 [2024-11-26 19:19:15.883978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.785 [2024-11-26 19:19:15.894230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.785 [2024-11-26 19:19:15.894247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.785 [2024-11-26 19:19:15.894254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.785 [2024-11-26 19:19:15.905063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.785 [2024-11-26 19:19:15.905080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.786 [2024-11-26 19:19:15.905087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.786 [2024-11-26 19:19:15.913171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.786 [2024-11-26 19:19:15.913188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.786 [2024-11-26 19:19:15.913194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.786 [2024-11-26 19:19:15.922689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.786 [2024-11-26 19:19:15.922706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.786 [2024-11-26 19:19:15.922713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.786 [2024-11-26 19:19:15.931364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.786 [2024-11-26 19:19:15.931381] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.786 [2024-11-26 19:19:15.931388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.786 [2024-11-26 19:19:15.942006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.786 [2024-11-26 19:19:15.942024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.786 [2024-11-26 19:19:15.942030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.786 [2024-11-26 19:19:15.952469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.786 [2024-11-26 19:19:15.952486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.786 [2024-11-26 19:19:15.952492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.786 [2024-11-26 19:19:15.961304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.786 [2024-11-26 19:19:15.961327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.786 [2024-11-26 19:19:15.961333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.786 [2024-11-26 19:19:15.969859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.786 [2024-11-26 19:19:15.969876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.786 [2024-11-26 19:19:15.969882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.786 27369.00 IOPS, 106.91 MiB/s [2024-11-26T18:19:15.999Z] [2024-11-26 19:19:15.979777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.786 [2024-11-26 19:19:15.979794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.786 [2024-11-26 19:19:15.979800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.786 [2024-11-26 19:19:15.989899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:58.786 [2024-11-26 19:19:15.989916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:11476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.786 [2024-11-26 19:19:15.989923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.047 [2024-11-26 19:19:15.997673] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.047 [2024-11-26 19:19:15.997691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.047 [2024-11-26 19:19:15.997698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.047 [2024-11-26 19:19:16.007322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.047 [2024-11-26 19:19:16.007339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.047 [2024-11-26 19:19:16.007345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.047 [2024-11-26 19:19:16.016726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.047 [2024-11-26 19:19:16.016743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.047 [2024-11-26 19:19:16.016749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.047 [2024-11-26 19:19:16.025361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.047 [2024-11-26 19:19:16.025378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:10303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.047 [2024-11-26 19:19:16.025384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.047 [2024-11-26 19:19:16.033557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.047 [2024-11-26 19:19:16.033575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.047 [2024-11-26 19:19:16.033581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.047 [2024-11-26 19:19:16.044034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.047 [2024-11-26 19:19:16.044055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.047 [2024-11-26 19:19:16.044061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.047 [2024-11-26 19:19:16.053573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.047 [2024-11-26 19:19:16.053589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.047 [2024-11-26 19:19:16.053595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:28:59.047 [2024-11-26 19:19:16.062379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.047 [2024-11-26 19:19:16.062396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.047 [2024-11-26 19:19:16.062402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.047 [2024-11-26 19:19:16.074014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.047 [2024-11-26 19:19:16.074031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.047 [2024-11-26 19:19:16.074037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.047 [2024-11-26 19:19:16.083953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.047 [2024-11-26 19:19:16.083970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.047 [2024-11-26 19:19:16.083976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.047 [2024-11-26 19:19:16.092271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.047 [2024-11-26 19:19:16.092288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.047 [2024-11-26 19:19:16.092294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.047 [2024-11-26 19:19:16.101237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.047 [2024-11-26 19:19:16.101254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.047 [2024-11-26 19:19:16.101260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.047 [2024-11-26 19:19:16.110595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.047 [2024-11-26 19:19:16.110612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.047 [2024-11-26 19:19:16.110619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.047 [2024-11-26 19:19:16.119855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.047 [2024-11-26 19:19:16.119873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.047 [2024-11-26 19:19:16.119879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.047 [2024-11-26 19:19:16.128474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.047 [2024-11-26 19:19:16.128491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.047 [2024-11-26 19:19:16.128498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.047 [2024-11-26 19:19:16.138308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.047 [2024-11-26 19:19:16.138325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.047 [2024-11-26 19:19:16.138331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.047 [2024-11-26 19:19:16.147901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.047 [2024-11-26 19:19:16.147919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.047 [2024-11-26 19:19:16.147926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.047 [2024-11-26 19:19:16.157679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.047 [2024-11-26 19:19:16.157697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.047 [2024-11-26 19:19:16.157703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.047 [2024-11-26 19:19:16.165693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.047 [2024-11-26 19:19:16.165711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.047 [2024-11-26 19:19:16.165717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.047 [2024-11-26 19:19:16.175757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.047 [2024-11-26 19:19:16.175774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.047 [2024-11-26 19:19:16.175780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.047 [2024-11-26 19:19:16.184088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.047 [2024-11-26 19:19:16.184105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.047 [2024-11-26 19:19:16.184111] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.047 [2024-11-26 19:19:16.193720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.047 [2024-11-26 19:19:16.193737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.047 [2024-11-26 19:19:16.193743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.047 [2024-11-26 19:19:16.202390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.047 [2024-11-26 19:19:16.202407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.047 [2024-11-26 19:19:16.202417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.047 [2024-11-26 19:19:16.211071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.047 [2024-11-26 19:19:16.211088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.047 [2024-11-26 19:19:16.211095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.047 [2024-11-26 19:19:16.219376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.047 [2024-11-26 19:19:16.219393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.047 [2024-11-26 19:19:16.219399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.047 [2024-11-26 19:19:16.228970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.047 [2024-11-26 19:19:16.228987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.047 [2024-11-26 19:19:16.228993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.047 [2024-11-26 19:19:16.237932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.047 [2024-11-26 19:19:16.237949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.047 [2024-11-26 19:19:16.237955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.047 [2024-11-26 19:19:16.248236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.047 [2024-11-26 19:19:16.248253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:59.047 [2024-11-26 19:19:16.248259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.308 [2024-11-26 19:19:16.255888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.308 [2024-11-26 19:19:16.255907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.308 [2024-11-26 19:19:16.255913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.308 [2024-11-26 19:19:16.266116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.308 [2024-11-26 19:19:16.266133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.308 [2024-11-26 19:19:16.266139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.308 [2024-11-26 19:19:16.277796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.308 [2024-11-26 19:19:16.277813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.308 [2024-11-26 19:19:16.277819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.308 [2024-11-26 19:19:16.287191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.308 [2024-11-26 19:19:16.287214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.308 [2024-11-26 19:19:16.287220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.308 [2024-11-26 19:19:16.294730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.308 [2024-11-26 19:19:16.294748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.308 [2024-11-26 19:19:16.294754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.308 [2024-11-26 19:19:16.305328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.308 [2024-11-26 19:19:16.305345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.308 [2024-11-26 19:19:16.305351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.308 [2024-11-26 19:19:16.315688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.308 [2024-11-26 19:19:16.315705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 
lba:17965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.308 [2024-11-26 19:19:16.315712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.308 [2024-11-26 19:19:16.322989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.308 [2024-11-26 19:19:16.323006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.308 [2024-11-26 19:19:16.323012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.308 [2024-11-26 19:19:16.333055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.308 [2024-11-26 19:19:16.333073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:25116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.308 [2024-11-26 19:19:16.333079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.308 [2024-11-26 19:19:16.341865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.308 [2024-11-26 19:19:16.341882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.308 [2024-11-26 19:19:16.341888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.308 [2024-11-26 19:19:16.351555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.308 [2024-11-26 19:19:16.351571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.308 [2024-11-26 19:19:16.351578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.308 [2024-11-26 19:19:16.359693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.308 [2024-11-26 19:19:16.359710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.308 [2024-11-26 19:19:16.359716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.308 [2024-11-26 19:19:16.369236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.308 [2024-11-26 19:19:16.369256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.308 [2024-11-26 19:19:16.369263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.308 [2024-11-26 19:19:16.378302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.308 [2024-11-26 19:19:16.378320] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.308 [2024-11-26 19:19:16.378326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.309 [2024-11-26 19:19:16.386730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.309 [2024-11-26 19:19:16.386748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.309 [2024-11-26 19:19:16.386754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.309 [2024-11-26 19:19:16.396004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.309 [2024-11-26 19:19:16.396021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.309 [2024-11-26 19:19:16.396028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.309 [2024-11-26 19:19:16.405404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.309 [2024-11-26 19:19:16.405421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.309 [2024-11-26 19:19:16.405427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.309 [2024-11-26 19:19:16.414853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.309 [2024-11-26 19:19:16.414870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.309 [2024-11-26 19:19:16.414877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.309 [2024-11-26 19:19:16.424958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.309 [2024-11-26 19:19:16.424975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.309 [2024-11-26 19:19:16.424982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.309 [2024-11-26 19:19:16.433054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 00:28:59.309 [2024-11-26 19:19:16.433071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:14130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.309 [2024-11-26 19:19:16.433077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.309 [2024-11-26 19:19:16.443577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190) 
00:28:59.309 [2024-11-26 19:19:16.443598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.309 [2024-11-26 19:19:16.443604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:59.309 [2024-11-26 19:19:16.454950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190)
00:28:59.309 [2024-11-26 19:19:16.454967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.309 [2024-11-26 19:19:16.454974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... roughly 55 further entry triples elided: a "data digest error on tqpair=(0x1be0190)" *ERROR*, the READ command print, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, identical apart from timestamp, cid, and lba ...]
00:28:59.831 [2024-11-26 19:19:16.972995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1be0190)
00:28:59.831 [2024-11-26 19:19:16.973012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.831 [2024-11-26 19:19:16.973018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
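The pair printed as (00/22) in these completions is SPDK's (SCT/SC) status encoding: status code type 0x0, the generic command status set, with status code 0x22, which the NVMe base specification defines as Command Transient Transport Error. That is the retryable status the digest test expects when the receive-path CRC32C check fails. A minimal bash sketch of the decode (the helper name is illustrative, not part of the harness):

  decode_nvme_status() {
      # usage: decode_nvme_status 00 22   -- the "(SCT/SC)" pair as printed above
      local sct=$((16#$1)) sc=$((16#$2))
      printf 'SCT=0x%x SC=0x%x\n' "$sct" "$sc"
      # SCT 0x0 selects the generic command status set; within it, SC 0x22 is
      # "Command Transient Transport Error", i.e. retryable transport damage.
  }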
00:28:59.831 27452.50 IOPS, 107.24 MiB/s
00:28:59.831 Latency(us)
00:28:59.831 [2024-11-26T18:19:17.044Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:59.831 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:59.831 nvme0n1 : 2.00 27472.90 107.32 0.00 0.00 4654.78 2225.49 15947.09
00:28:59.831 [2024-11-26T18:19:17.044Z] ===================================================================================================================
00:28:59.831 [2024-11-26T18:19:17.044Z] Total : 27472.90 107.32 0.00 0.00 4654.78 2225.49 15947.09
00:28:59.831 {
00:28:59.831   "results": [
00:28:59.831     {
00:28:59.831       "job": "nvme0n1",
00:28:59.831       "core_mask": "0x2",
00:28:59.831       "workload": "randread",
00:28:59.831       "status": "finished",
00:28:59.831       "queue_depth": 128,
00:28:59.831       "io_size": 4096,
00:28:59.831       "runtime": 2.003174,
00:28:59.831       "iops": 27472.900506895556,
00:28:59.831       "mibps": 107.31601760506076,
00:28:59.831       "io_failed": 0,
00:28:59.831       "io_timeout": 0,
00:28:59.831       "avg_latency_us": 4654.775809423437,
00:28:59.831       "min_latency_us": 2225.4933333333333,
00:28:59.831       "max_latency_us": 15947.093333333334
00:28:59.831     }
00:28:59.831   ],
00:28:59.831   "core_count": 1
00:28:59.831 }
19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:59.831 | .driver_specific
00:28:59.831 | .nvme_error
00:28:59.831 | .status_code
00:28:59.831 | .command_transient_transport_error'
19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:00.091 19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 215 > 0 ))
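The get_transient_errcount trace above shows how the pass/fail decision is made: bdev_get_iostat is queried over the bperf socket and the per-controller error counters, accumulated because bdev_nvme_set_options was given --nvme-error-stat, are extracted with jq; the (( 215 > 0 )) line is that assertion with this run's count of 215. A condensed sketch of the same check, assuming the socket and bdev names used in this run:

  count=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
              -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
          jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # the test fails unless at least one transient transport error was counted
  (( count > 0 ))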
19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3117700
00:29:00.091 19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3117700 ']'
19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3117700
19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3117700
19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3117700'
killing process with pid 3117700
19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3117700
Received shutdown signal, test time was about 2.000000 seconds
00:29:00.092
00:29:00.092 Latency(us)
00:29:00.092 [2024-11-26T18:19:17.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:00.092 [2024-11-26T18:19:17.305Z] ===================================================================================================================
00:29:00.092 [2024-11-26T18:19:17.305Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3117700
19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3118387
19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3118387 /var/tmp/bperf.sock
19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3118387 ']'
19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
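run_bperf_err randread 131072 16 forwards its rw/bs/qd arguments to the bdevperf invocation launched on the next lines: -w for the workload, -o for the I/O size in bytes, and -q for the queue depth, alongside -t 2 for a two-second run, -z to idle until a perform_tests RPC arrives on the -r socket, and -m 2 to pin the reactor to core 1. Roughly, as a sketch (relative path assumes an SPDK checkout as working directory):

  rw=randread bs=131072 qd=16               # run_bperf_err's positional arguments
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w "$rw" -o "$bs" -t 2 -q "$qd" -z &  # -z: wait for the perform_tests RPC
  bperfpid=$!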
00:29:00.352 19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
19:19:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:29:00.352 [2024-11-26 19:19:17.412346] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization...
00:29:00.352 [2024-11-26 19:19:17.412401] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3118387 ]
00:29:00.352 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:00.352 Zero copy mechanism will not be used.
00:29:00.352 [2024-11-26 19:19:17.495332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:00.352 [2024-11-26 19:19:17.524182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:01.292 19:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
19:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
19:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
19:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
19:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
19:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
19:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
19:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
19:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
19:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:01.552 nvme0n1
00:29:01.552 19:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
19:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
19:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
19:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
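Once bdevperf is listening, the whole setup is driven over JSON-RPC: per-controller NVMe error counting and infinite retries are enabled on the bperf socket, the controller is attached with TCP data digest turned on (--ddgst), and the accel crc32c error injector is armed to corrupt at an interval of 32 operations (-i 32). Judging by the bperf_rpc/rpc_cmd split in the trace, the injection is sent to the target application's default RPC socket, so bad digests are generated on the wire and caught by the host above; that routing is an inference from the trace, not something the log states. The sequence, condensed:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # bdevperf side: count NVMe errors per controller, retry errored I/O forever
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # attach the controller with TCP data digest enabled
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # target side (default RPC socket): corrupt every 32nd crc32c so digests go bad
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 32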
19:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
19:19:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
Running I/O for 2 seconds...
00:29:01.813 [2024-11-26 19:19:18.787998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:01.813 [2024-11-26 19:19:18.788033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.813 [2024-11-26 19:19:18.788042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... several dozen further entry triples on tqpair=(0x1559570) elided; identical apart from timestamp, cid, lba, and sqhd, now with len:32 (128 KiB reads) ...]
00:29:02.076 [2024-11-26 19:19:19.226009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.076 [2024-11-26 19:19:19.226027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.076 [2024-11-26 19:19:19.226033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:02.076 [2024-11-26 19:19:19.234374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done:
*ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.076 [2024-11-26 19:19:19.234393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.076 [2024-11-26 19:19:19.234399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.076 [2024-11-26 19:19:19.241680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.076 [2024-11-26 19:19:19.241700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.076 [2024-11-26 19:19:19.241706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.076 [2024-11-26 19:19:19.247547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.076 [2024-11-26 19:19:19.247566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.076 [2024-11-26 19:19:19.247576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.076 [2024-11-26 19:19:19.253642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.076 [2024-11-26 19:19:19.253660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.076 [2024-11-26 19:19:19.253666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.076 [2024-11-26 19:19:19.259787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.077 [2024-11-26 19:19:19.259805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.077 [2024-11-26 19:19:19.259812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.077 [2024-11-26 19:19:19.265004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.077 [2024-11-26 19:19:19.265022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.077 [2024-11-26 19:19:19.265029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.077 [2024-11-26 19:19:19.272325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.077 [2024-11-26 19:19:19.272344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.077 [2024-11-26 19:19:19.272351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.077 [2024-11-26 19:19:19.278464] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.077 [2024-11-26 19:19:19.278483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.077 [2024-11-26 19:19:19.278490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.338 [2024-11-26 19:19:19.285476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.338 [2024-11-26 19:19:19.285495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.338 [2024-11-26 19:19:19.285502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.338 [2024-11-26 19:19:19.292582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.338 [2024-11-26 19:19:19.292600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.338 [2024-11-26 19:19:19.292607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.338 [2024-11-26 19:19:19.299404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.338 [2024-11-26 19:19:19.299423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.338 [2024-11-26 19:19:19.299430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.338 [2024-11-26 19:19:19.305328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.338 [2024-11-26 19:19:19.305346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.338 [2024-11-26 19:19:19.305352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.338 [2024-11-26 19:19:19.310901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.338 [2024-11-26 19:19:19.310919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.338 [2024-11-26 19:19:19.310926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.338 [2024-11-26 19:19:19.319277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.338 [2024-11-26 19:19:19.319295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.338 [2024-11-26 19:19:19.319301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:29:02.338 [2024-11-26 19:19:19.325878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.339 [2024-11-26 19:19:19.325897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.339 [2024-11-26 19:19:19.325903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.339 [2024-11-26 19:19:19.332240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.339 [2024-11-26 19:19:19.332258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.339 [2024-11-26 19:19:19.332264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.339 [2024-11-26 19:19:19.338098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.339 [2024-11-26 19:19:19.338117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.339 [2024-11-26 19:19:19.338123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.339 [2024-11-26 19:19:19.344639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.339 [2024-11-26 19:19:19.344658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.339 [2024-11-26 19:19:19.344664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.339 [2024-11-26 19:19:19.351923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.339 [2024-11-26 19:19:19.351942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.339 [2024-11-26 19:19:19.351948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.339 [2024-11-26 19:19:19.358377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.339 [2024-11-26 19:19:19.358394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.339 [2024-11-26 19:19:19.358404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.339 [2024-11-26 19:19:19.364539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.339 [2024-11-26 19:19:19.364558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.339 [2024-11-26 19:19:19.364564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.339 [2024-11-26 19:19:19.370853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.339 [2024-11-26 19:19:19.370872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.339 [2024-11-26 19:19:19.370878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.339 [2024-11-26 19:19:19.377992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.339 [2024-11-26 19:19:19.378011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.339 [2024-11-26 19:19:19.378017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.339 [2024-11-26 19:19:19.385418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.339 [2024-11-26 19:19:19.385437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.339 [2024-11-26 19:19:19.385443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.339 [2024-11-26 19:19:19.391646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.339 [2024-11-26 19:19:19.391665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.339 [2024-11-26 19:19:19.391671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.339 [2024-11-26 19:19:19.397771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.339 [2024-11-26 19:19:19.397790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.339 [2024-11-26 19:19:19.397796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.339 [2024-11-26 19:19:19.405809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.339 [2024-11-26 19:19:19.405828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.339 [2024-11-26 19:19:19.405834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.339 [2024-11-26 19:19:19.413764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.339 [2024-11-26 19:19:19.413782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.339 [2024-11-26 19:19:19.413788] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.339 [2024-11-26 19:19:19.420095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.339 [2024-11-26 19:19:19.420117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.339 [2024-11-26 19:19:19.420123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.339 [2024-11-26 19:19:19.426370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.339 [2024-11-26 19:19:19.426389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.339 [2024-11-26 19:19:19.426395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.339 [2024-11-26 19:19:19.432416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.339 [2024-11-26 19:19:19.432435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.339 [2024-11-26 19:19:19.432441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.339 [2024-11-26 19:19:19.441355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.339 [2024-11-26 19:19:19.441374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.339 [2024-11-26 19:19:19.441380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.339 [2024-11-26 19:19:19.450349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.339 [2024-11-26 19:19:19.450368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.339 [2024-11-26 19:19:19.450375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.339 [2024-11-26 19:19:19.458335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.339 [2024-11-26 19:19:19.458353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.339 [2024-11-26 19:19:19.458359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.339 [2024-11-26 19:19:19.467998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.339 [2024-11-26 19:19:19.468017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.339 [2024-11-26 19:19:19.468023] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.339 [2024-11-26 19:19:19.476334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.339 [2024-11-26 19:19:19.476352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.339 [2024-11-26 19:19:19.476359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.339 [2024-11-26 19:19:19.486288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.339 [2024-11-26 19:19:19.486306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.339 [2024-11-26 19:19:19.486312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.339 [2024-11-26 19:19:19.493729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.339 [2024-11-26 19:19:19.493747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.339 [2024-11-26 19:19:19.493754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.339 [2024-11-26 19:19:19.500344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.339 [2024-11-26 19:19:19.500363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.339 [2024-11-26 19:19:19.500369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.339 [2024-11-26 19:19:19.507549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.339 [2024-11-26 19:19:19.507567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.339 [2024-11-26 19:19:19.507574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.339 [2024-11-26 19:19:19.514563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.339 [2024-11-26 19:19:19.514581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.340 [2024-11-26 19:19:19.514588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.340 [2024-11-26 19:19:19.520441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.340 [2024-11-26 19:19:19.520459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:02.340 [2024-11-26 19:19:19.520466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.340 [2024-11-26 19:19:19.526592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.340 [2024-11-26 19:19:19.526610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.340 [2024-11-26 19:19:19.526617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.340 [2024-11-26 19:19:19.531825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.340 [2024-11-26 19:19:19.531843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.340 [2024-11-26 19:19:19.531850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.340 [2024-11-26 19:19:19.538560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.340 [2024-11-26 19:19:19.538578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.340 [2024-11-26 19:19:19.538584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.340 [2024-11-26 19:19:19.544796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.340 [2024-11-26 19:19:19.544814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.340 [2024-11-26 19:19:19.544824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.601 [2024-11-26 19:19:19.550462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.601 [2024-11-26 19:19:19.550481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.601 [2024-11-26 19:19:19.550488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.601 [2024-11-26 19:19:19.556346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.601 [2024-11-26 19:19:19.556364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.601 [2024-11-26 19:19:19.556370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.601 [2024-11-26 19:19:19.563251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.601 [2024-11-26 19:19:19.563269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.601 [2024-11-26 19:19:19.563275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.601 [2024-11-26 19:19:19.571844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.601 [2024-11-26 19:19:19.571862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.601 [2024-11-26 19:19:19.571869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.601 [2024-11-26 19:19:19.578009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.601 [2024-11-26 19:19:19.578027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.601 [2024-11-26 19:19:19.578034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.601 [2024-11-26 19:19:19.584157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.601 [2024-11-26 19:19:19.584181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.601 [2024-11-26 19:19:19.584187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.601 [2024-11-26 19:19:19.590380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.601 [2024-11-26 19:19:19.590398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.601 [2024-11-26 19:19:19.590405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.601 [2024-11-26 19:19:19.597879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.601 [2024-11-26 19:19:19.597898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.601 [2024-11-26 19:19:19.597904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.601 [2024-11-26 19:19:19.605089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.601 [2024-11-26 19:19:19.605111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.601 [2024-11-26 19:19:19.605117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.601 [2024-11-26 19:19:19.612261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.601 [2024-11-26 19:19:19.612279] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.601 [2024-11-26 19:19:19.612286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.601 [2024-11-26 19:19:19.618307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.601 [2024-11-26 19:19:19.618325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.601 [2024-11-26 19:19:19.618332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.601 [2024-11-26 19:19:19.624947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.601 [2024-11-26 19:19:19.624966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.601 [2024-11-26 19:19:19.624972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.601 [2024-11-26 19:19:19.631632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.601 [2024-11-26 19:19:19.631650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.601 [2024-11-26 19:19:19.631656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.601 [2024-11-26 19:19:19.637981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.601 [2024-11-26 19:19:19.637999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.601 [2024-11-26 19:19:19.638005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.601 [2024-11-26 19:19:19.643799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.601 [2024-11-26 19:19:19.643817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.601 [2024-11-26 19:19:19.643824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.601 [2024-11-26 19:19:19.651101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.601 [2024-11-26 19:19:19.651119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.601 [2024-11-26 19:19:19.651126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.601 [2024-11-26 19:19:19.657705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.601 
[2024-11-26 19:19:19.657723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.601 [2024-11-26 19:19:19.657729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.601 [2024-11-26 19:19:19.664009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.601 [2024-11-26 19:19:19.664027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.601 [2024-11-26 19:19:19.664033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.602 [2024-11-26 19:19:19.669968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.602 [2024-11-26 19:19:19.669986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.602 [2024-11-26 19:19:19.669992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.602 [2024-11-26 19:19:19.672925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.602 [2024-11-26 19:19:19.672944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.602 [2024-11-26 19:19:19.672950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.602 [2024-11-26 19:19:19.679773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.602 [2024-11-26 19:19:19.679791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.602 [2024-11-26 19:19:19.679797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.602 [2024-11-26 19:19:19.686659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.602 [2024-11-26 19:19:19.686677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.602 [2024-11-26 19:19:19.686684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.602 [2024-11-26 19:19:19.691816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.602 [2024-11-26 19:19:19.691834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.602 [2024-11-26 19:19:19.691840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.602 [2024-11-26 19:19:19.698244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1559570) 00:29:02.602 [2024-11-26 19:19:19.698262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.602 [2024-11-26 19:19:19.698268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.602 [2024-11-26 19:19:19.706017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.602 [2024-11-26 19:19:19.706036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.602 [2024-11-26 19:19:19.706042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.602 [2024-11-26 19:19:19.715056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.602 [2024-11-26 19:19:19.715074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.602 [2024-11-26 19:19:19.715084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.602 [2024-11-26 19:19:19.721796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.602 [2024-11-26 19:19:19.721814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.602 [2024-11-26 19:19:19.721821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.602 [2024-11-26 19:19:19.727644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.602 [2024-11-26 19:19:19.727662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.602 [2024-11-26 19:19:19.727668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.602 [2024-11-26 19:19:19.733743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.602 [2024-11-26 19:19:19.733761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.602 [2024-11-26 19:19:19.733768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.602 [2024-11-26 19:19:19.741023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.602 [2024-11-26 19:19:19.741041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.602 [2024-11-26 19:19:19.741047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.602 [2024-11-26 19:19:19.748728] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.602 [2024-11-26 19:19:19.748746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.602 [2024-11-26 19:19:19.748752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.602 [2024-11-26 19:19:19.754961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.602 [2024-11-26 19:19:19.754980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.602 [2024-11-26 19:19:19.754986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.602 [2024-11-26 19:19:19.760759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.602 [2024-11-26 19:19:19.760777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.602 [2024-11-26 19:19:19.760783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.602 [2024-11-26 19:19:19.767718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.602 [2024-11-26 19:19:19.767736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.602 [2024-11-26 19:19:19.767743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.602 [2024-11-26 19:19:19.775037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.602 [2024-11-26 19:19:19.775056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.602 [2024-11-26 19:19:19.775062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.602 [2024-11-26 19:19:19.780793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.602 [2024-11-26 19:19:19.780811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.602 [2024-11-26 19:19:19.780818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.602 [2024-11-26 19:19:19.786016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.602 [2024-11-26 19:19:19.786034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.602 [2024-11-26 19:19:19.786040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:29:02.602 4657.00 IOPS, 582.12 MiB/s [2024-11-26T18:19:19.815Z] [2024-11-26 19:19:19.793711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.602 [2024-11-26 19:19:19.793730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.602 [2024-11-26 19:19:19.793736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.602 [2024-11-26 19:19:19.798996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.602 [2024-11-26 19:19:19.799015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.602 [2024-11-26 19:19:19.799021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.602 [2024-11-26 19:19:19.804588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.602 [2024-11-26 19:19:19.804605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.602 [2024-11-26 19:19:19.804613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.602 [2024-11-26 19:19:19.809510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.602 [2024-11-26 19:19:19.809528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.602 [2024-11-26 19:19:19.809534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.863 [2024-11-26 19:19:19.814409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.863 [2024-11-26 19:19:19.814428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.863 [2024-11-26 19:19:19.814434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.863 [2024-11-26 19:19:19.819600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.863 [2024-11-26 19:19:19.819618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.863 [2024-11-26 19:19:19.819629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.863 [2024-11-26 19:19:19.824706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:02.864 [2024-11-26 19:19:19.824725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.864 [2024-11-26 19:19:19.824731] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:02.864 [2024-11-26 19:19:19.829602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.864 [2024-11-26 19:19:19.829620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.864 [2024-11-26 19:19:19.829626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:02.864 [2024-11-26 19:19:19.834415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.864 [2024-11-26 19:19:19.834433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.864 [2024-11-26 19:19:19.834439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:02.864 [2024-11-26 19:19:19.839299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.864 [2024-11-26 19:19:19.839317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.864 [2024-11-26 19:19:19.839323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:02.864 [2024-11-26 19:19:19.844503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.864 [2024-11-26 19:19:19.844521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.864 [2024-11-26 19:19:19.844527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:02.864 [2024-11-26 19:19:19.849421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.864 [2024-11-26 19:19:19.849439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.864 [2024-11-26 19:19:19.849446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:02.864 [2024-11-26 19:19:19.854400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.864 [2024-11-26 19:19:19.854418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.864 [2024-11-26 19:19:19.854424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:02.864 [2024-11-26 19:19:19.859234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.864 [2024-11-26 19:19:19.859252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.864 [2024-11-26 19:19:19.859259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:02.864 [2024-11-26 19:19:19.864304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.864 [2024-11-26 19:19:19.864329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.864 [2024-11-26 19:19:19.864335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:02.864 [2024-11-26 19:19:19.869508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.864 [2024-11-26 19:19:19.869525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.864 [2024-11-26 19:19:19.869531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:02.864 [2024-11-26 19:19:19.872427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.864 [2024-11-26 19:19:19.872444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.864 [2024-11-26 19:19:19.872450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:02.864 [2024-11-26 19:19:19.876732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.864 [2024-11-26 19:19:19.876750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.864 [2024-11-26 19:19:19.876756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:02.864 [2024-11-26 19:19:19.882138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.864 [2024-11-26 19:19:19.882156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.864 [2024-11-26 19:19:19.882168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:02.864 [2024-11-26 19:19:19.886830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.864 [2024-11-26 19:19:19.886848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.864 [2024-11-26 19:19:19.886855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:02.864 [2024-11-26 19:19:19.892428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.864 [2024-11-26 19:19:19.892447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.864 [2024-11-26 19:19:19.892453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:02.864 [2024-11-26 19:19:19.899198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.864 [2024-11-26 19:19:19.899216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.864 [2024-11-26 19:19:19.899223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:02.864 [2024-11-26 19:19:19.907223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.864 [2024-11-26 19:19:19.907241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.864 [2024-11-26 19:19:19.907248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:02.864 [2024-11-26 19:19:19.914793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.864 [2024-11-26 19:19:19.914811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.864 [2024-11-26 19:19:19.914817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:02.864 [2024-11-26 19:19:19.921984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.864 [2024-11-26 19:19:19.922002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.864 [2024-11-26 19:19:19.922009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:02.864 [2024-11-26 19:19:19.927583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.864 [2024-11-26 19:19:19.927601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.864 [2024-11-26 19:19:19.927608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:02.864 [2024-11-26 19:19:19.935235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.864 [2024-11-26 19:19:19.935253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.865 [2024-11-26 19:19:19.935259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:02.865 [2024-11-26 19:19:19.944075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.865 [2024-11-26 19:19:19.944094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.865 [2024-11-26 19:19:19.944100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:02.865 [2024-11-26 19:19:19.951712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.865 [2024-11-26 19:19:19.951730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.865 [2024-11-26 19:19:19.951737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:02.865 [2024-11-26 19:19:19.963076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.865 [2024-11-26 19:19:19.963093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.865 [2024-11-26 19:19:19.963100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:02.865 [2024-11-26 19:19:19.970357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.865 [2024-11-26 19:19:19.970375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.865 [2024-11-26 19:19:19.970381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:02.865 [2024-11-26 19:19:19.976415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.865 [2024-11-26 19:19:19.976436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.865 [2024-11-26 19:19:19.976442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:02.865 [2024-11-26 19:19:19.982280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.865 [2024-11-26 19:19:19.982297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.865 [2024-11-26 19:19:19.982303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:02.865 [2024-11-26 19:19:19.988305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.865 [2024-11-26 19:19:19.988323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.865 [2024-11-26 19:19:19.988330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:02.865 [2024-11-26 19:19:19.994302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.865 [2024-11-26 19:19:19.994319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.865 [2024-11-26 19:19:19.994325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:02.865 [2024-11-26 19:19:20.001401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.865 [2024-11-26 19:19:20.001419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.865 [2024-11-26 19:19:20.001426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:02.865 [2024-11-26 19:19:20.008960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.865 [2024-11-26 19:19:20.008980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.865 [2024-11-26 19:19:20.008987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:02.865 [2024-11-26 19:19:20.019245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.865 [2024-11-26 19:19:20.019264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.865 [2024-11-26 19:19:20.019271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:02.865 [2024-11-26 19:19:20.025808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.865 [2024-11-26 19:19:20.025827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.865 [2024-11-26 19:19:20.025833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:02.865 [2024-11-26 19:19:20.031154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.865 [2024-11-26 19:19:20.031177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.865 [2024-11-26 19:19:20.031184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:02.865 [2024-11-26 19:19:20.038217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.865 [2024-11-26 19:19:20.038235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.865 [2024-11-26 19:19:20.038242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:02.865 [2024-11-26 19:19:20.044068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.865 [2024-11-26 19:19:20.044087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.865 [2024-11-26 19:19:20.044094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:02.865 [2024-11-26 19:19:20.052968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.865 [2024-11-26 19:19:20.052986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.865 [2024-11-26 19:19:20.052992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:02.865 [2024-11-26 19:19:20.061449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.865 [2024-11-26 19:19:20.061468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.865 [2024-11-26 19:19:20.061475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:02.865 [2024-11-26 19:19:20.068979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:02.865 [2024-11-26 19:19:20.068997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.865 [2024-11-26 19:19:20.069004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.154 [2024-11-26 19:19:20.074932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.154 [2024-11-26 19:19:20.074951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.154 [2024-11-26 19:19:20.074957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.154 [2024-11-26 19:19:20.081701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.154 [2024-11-26 19:19:20.081719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.154 [2024-11-26 19:19:20.081725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.154 [2024-11-26 19:19:20.089279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.154 [2024-11-26 19:19:20.089298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.154 [2024-11-26 19:19:20.089304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.154 [2024-11-26 19:19:20.096936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.154 [2024-11-26 19:19:20.096955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.154 [2024-11-26 19:19:20.096965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.154 [2024-11-26 19:19:20.103305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.154 [2024-11-26 19:19:20.103323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.154 [2024-11-26 19:19:20.103329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.154 [2024-11-26 19:19:20.109120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.154 [2024-11-26 19:19:20.109138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.154 [2024-11-26 19:19:20.109145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.154 [2024-11-26 19:19:20.115552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.154 [2024-11-26 19:19:20.115570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.154 [2024-11-26 19:19:20.115576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.154 [2024-11-26 19:19:20.121489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.154 [2024-11-26 19:19:20.121508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.154 [2024-11-26 19:19:20.121514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.154 [2024-11-26 19:19:20.129068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.154 [2024-11-26 19:19:20.129086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.154 [2024-11-26 19:19:20.129093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.154 [2024-11-26 19:19:20.136791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.154 [2024-11-26 19:19:20.136809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.154 [2024-11-26 19:19:20.136815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.154 [2024-11-26 19:19:20.142526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.154 [2024-11-26 19:19:20.142543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.154 [2024-11-26 19:19:20.142550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.154 [2024-11-26 19:19:20.145956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.154 [2024-11-26 19:19:20.145974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.154 [2024-11-26 19:19:20.145981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.154 [2024-11-26 19:19:20.152592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.154 [2024-11-26 19:19:20.152613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.154 [2024-11-26 19:19:20.152619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.154 [2024-11-26 19:19:20.158260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.154 [2024-11-26 19:19:20.158278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.155 [2024-11-26 19:19:20.158285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.155 [2024-11-26 19:19:20.164171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.155 [2024-11-26 19:19:20.164188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.155 [2024-11-26 19:19:20.164195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.155 [2024-11-26 19:19:20.169936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.155 [2024-11-26 19:19:20.169953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.155 [2024-11-26 19:19:20.169959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.155 [2024-11-26 19:19:20.175261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.155 [2024-11-26 19:19:20.175277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.155 [2024-11-26 19:19:20.175284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.155 [2024-11-26 19:19:20.180812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.155 [2024-11-26 19:19:20.180829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.155 [2024-11-26 19:19:20.180835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.155 [2024-11-26 19:19:20.186652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.155 [2024-11-26 19:19:20.186668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.155 [2024-11-26 19:19:20.186674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.155 [2024-11-26 19:19:20.192450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.155 [2024-11-26 19:19:20.192466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.155 [2024-11-26 19:19:20.192472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.155 [2024-11-26 19:19:20.197730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.155 [2024-11-26 19:19:20.197747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.155 [2024-11-26 19:19:20.197753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.155 [2024-11-26 19:19:20.203689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.155 [2024-11-26 19:19:20.203706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.155 [2024-11-26 19:19:20.203712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.155 [2024-11-26 19:19:20.208129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.155 [2024-11-26 19:19:20.208147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.155 [2024-11-26 19:19:20.208153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.155 [2024-11-26 19:19:20.213008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.155 [2024-11-26 19:19:20.213025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.155 [2024-11-26 19:19:20.213032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.155 [2024-11-26 19:19:20.217666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.155 [2024-11-26 19:19:20.217683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.155 [2024-11-26 19:19:20.217689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.155 [2024-11-26 19:19:20.222997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.155 [2024-11-26 19:19:20.223013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.155 [2024-11-26 19:19:20.223020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.155 [2024-11-26 19:19:20.228198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.155 [2024-11-26 19:19:20.228215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.155 [2024-11-26 19:19:20.228221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.155 [2024-11-26 19:19:20.233400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.155 [2024-11-26 19:19:20.233416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.155 [2024-11-26 19:19:20.233423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.155 [2024-11-26 19:19:20.238760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.155 [2024-11-26 19:19:20.238776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.155 [2024-11-26 19:19:20.238782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.155 [2024-11-26 19:19:20.244067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.155 [2024-11-26 19:19:20.244084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.155 [2024-11-26 19:19:20.244094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.155 [2024-11-26 19:19:20.249283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.155 [2024-11-26 19:19:20.249299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.155 [2024-11-26 19:19:20.249305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.155 [2024-11-26 19:19:20.254326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.155 [2024-11-26 19:19:20.254343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.155 [2024-11-26 19:19:20.254349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.155 [2024-11-26 19:19:20.259628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.155 [2024-11-26 19:19:20.259645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.155 [2024-11-26 19:19:20.259652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.155 [2024-11-26 19:19:20.264971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.155 [2024-11-26 19:19:20.264989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.155 [2024-11-26 19:19:20.264995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.155 [2024-11-26 19:19:20.270165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.155 [2024-11-26 19:19:20.270182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.155 [2024-11-26 19:19:20.270189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.156 [2024-11-26 19:19:20.275522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.156 [2024-11-26 19:19:20.275540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.156 [2024-11-26 19:19:20.275548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.156 [2024-11-26 19:19:20.280134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.156 [2024-11-26 19:19:20.280152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.156 [2024-11-26 19:19:20.280164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.156 [2024-11-26 19:19:20.287972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.156 [2024-11-26 19:19:20.287990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.156 [2024-11-26 19:19:20.287996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.156 [2024-11-26 19:19:20.294598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.156 [2024-11-26 19:19:20.294616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.156 [2024-11-26 19:19:20.294622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.156 [2024-11-26 19:19:20.300870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.156 [2024-11-26 19:19:20.300888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.156 [2024-11-26 19:19:20.300894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.156 [2024-11-26 19:19:20.306919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.156 [2024-11-26 19:19:20.306936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.156 [2024-11-26 19:19:20.306942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.156 [2024-11-26 19:19:20.314096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.156 [2024-11-26 19:19:20.314114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.156 [2024-11-26 19:19:20.314121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.156 [2024-11-26 19:19:20.322109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.156 [2024-11-26 19:19:20.322127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.156 [2024-11-26 19:19:20.322133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.156 [2024-11-26 19:19:20.328619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.156 [2024-11-26 19:19:20.328637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.156 [2024-11-26 19:19:20.328643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.156 [2024-11-26 19:19:20.334728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.156 [2024-11-26 19:19:20.334745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.156 [2024-11-26 19:19:20.334751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.156 [2024-11-26 19:19:20.341072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.156 [2024-11-26 19:19:20.341090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.156 [2024-11-26 19:19:20.341096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.156 [2024-11-26 19:19:20.347662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.156 [2024-11-26 19:19:20.347680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.156 [2024-11-26 19:19:20.347689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.156 [2024-11-26 19:19:20.356063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.156 [2024-11-26 19:19:20.356081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.156 [2024-11-26 19:19:20.356087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.156 [2024-11-26 19:19:20.361769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.156 [2024-11-26 19:19:20.361787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.156 [2024-11-26 19:19:20.361793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.417 [2024-11-26 19:19:20.368047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.417 [2024-11-26 19:19:20.368066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.417 [2024-11-26 19:19:20.368072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.417 [2024-11-26 19:19:20.373065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.417 [2024-11-26 19:19:20.373083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.417 [2024-11-26 19:19:20.373090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.417 [2024-11-26 19:19:20.378696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.417 [2024-11-26 19:19:20.378714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.417 [2024-11-26 19:19:20.378720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.417 [2024-11-26 19:19:20.385975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.417 [2024-11-26 19:19:20.385993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.417 [2024-11-26 19:19:20.385999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.417 [2024-11-26 19:19:20.393616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.417 [2024-11-26 19:19:20.393634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.418 [2024-11-26 19:19:20.393640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.418 [2024-11-26 19:19:20.400317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.418 [2024-11-26 19:19:20.400335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.418 [2024-11-26 19:19:20.400341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.418 [2024-11-26 19:19:20.406260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.418 [2024-11-26 19:19:20.406282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.418 [2024-11-26 19:19:20.406289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.418 [2024-11-26 19:19:20.412769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.418 [2024-11-26 19:19:20.412786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.418 [2024-11-26 19:19:20.412792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.418 [2024-11-26 19:19:20.420434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.418 [2024-11-26 19:19:20.420452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.418 [2024-11-26 19:19:20.420458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.418 [2024-11-26 19:19:20.427419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.418 [2024-11-26 19:19:20.427437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.418 [2024-11-26 19:19:20.427443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.418 [2024-11-26 19:19:20.434218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.418 [2024-11-26 19:19:20.434236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.418 [2024-11-26 19:19:20.434242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.418 [2024-11-26 19:19:20.439453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.418 [2024-11-26 19:19:20.439471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.418 [2024-11-26 19:19:20.439477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.418 [2024-11-26 19:19:20.445219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.418 [2024-11-26 19:19:20.445236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.418 [2024-11-26 19:19:20.445242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.418 [2024-11-26 19:19:20.450689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.418 [2024-11-26 19:19:20.450706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.418 [2024-11-26 19:19:20.450712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.418 [2024-11-26 19:19:20.456141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.418 [2024-11-26 19:19:20.456165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.418 [2024-11-26 19:19:20.456171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.418 [2024-11-26 19:19:20.461637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.418 [2024-11-26 19:19:20.461655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.418 [2024-11-26 19:19:20.461661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.418 [2024-11-26 19:19:20.466758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.418 [2024-11-26 19:19:20.466775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.418 [2024-11-26 19:19:20.466782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.418 [2024-11-26 19:19:20.471877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.418 [2024-11-26 19:19:20.471895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.418 [2024-11-26 19:19:20.471901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.418 [2024-11-26 19:19:20.477262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.418 [2024-11-26 19:19:20.477279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.418 [2024-11-26 19:19:20.477285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.418 [2024-11-26 19:19:20.482699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.418 [2024-11-26 19:19:20.482717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.418 [2024-11-26 19:19:20.482723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.418 [2024-11-26 19:19:20.488042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.418 [2024-11-26 19:19:20.488060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.418 [2024-11-26 19:19:20.488066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.418 [2024-11-26 19:19:20.493230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.418 [2024-11-26 19:19:20.493247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.418 [2024-11-26 19:19:20.493254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.418 [2024-11-26 19:19:20.497952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.418 [2024-11-26 19:19:20.497969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.418 [2024-11-26 19:19:20.497976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.418 [2024-11-26 19:19:20.502790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.418 [2024-11-26 19:19:20.502808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.418 [2024-11-26 19:19:20.502818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.418 [2024-11-26 19:19:20.507598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.418 [2024-11-26 19:19:20.507616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.418 [2024-11-26 19:19:20.507622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.418 [2024-11-26 19:19:20.512404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.418 [2024-11-26 19:19:20.512421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.418 [2024-11-26 19:19:20.512427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.418 [2024-11-26 19:19:20.517301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.418 [2024-11-26 19:19:20.517319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.418 [2024-11-26 19:19:20.517325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.418 [2024-11-26 19:19:20.522240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.418 [2024-11-26 19:19:20.522257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.419 [2024-11-26 19:19:20.522264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.419 [2024-11-26 19:19:20.527307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.419 [2024-11-26 19:19:20.527325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.419 [2024-11-26 19:19:20.527331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.419 [2024-11-26 19:19:20.532483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.419 [2024-11-26 19:19:20.532500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.419 [2024-11-26 19:19:20.532507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.419 [2024-11-26 19:19:20.537434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.419 [2024-11-26 19:19:20.537451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.419 [2024-11-26 19:19:20.537458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.419 [2024-11-26 19:19:20.542555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.419 [2024-11-26 19:19:20.542573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.419 [2024-11-26 19:19:20.542579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.419 [2024-11-26 19:19:20.547561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.419 [2024-11-26 19:19:20.547581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.419 [2024-11-26 19:19:20.547588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.419 [2024-11-26 19:19:20.550881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.419 [2024-11-26 19:19:20.550898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.419 [2024-11-26 19:19:20.550905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.419 [2024-11-26 19:19:20.554768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.419 [2024-11-26 19:19:20.554785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.419 [2024-11-26 19:19:20.554792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.419 [2024-11-26 19:19:20.559491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.419 [2024-11-26 19:19:20.559509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.419 [2024-11-26 19:19:20.559515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.419 [2024-11-26 19:19:20.564424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.419 [2024-11-26 19:19:20.564442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.419 [2024-11-26 19:19:20.564449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.419 [2024-11-26 19:19:20.569262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.419 [2024-11-26 19:19:20.569280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.419 [2024-11-26 19:19:20.569286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.419 [2024-11-26 19:19:20.574162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.419 [2024-11-26 19:19:20.574179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.419 [2024-11-26 19:19:20.574186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.419 [2024-11-26 19:19:20.579249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.419 [2024-11-26 19:19:20.579267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.419 [2024-11-26 19:19:20.579273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.419 [2024-11-26 19:19:20.584601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.419 [2024-11-26 19:19:20.584618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.419 [2024-11-26 19:19:20.584627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.419 [2024-11-26 19:19:20.589968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.419 [2024-11-26 19:19:20.589985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.419 [2024-11-26 19:19:20.589991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.419 [2024-11-26 19:19:20.595362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.419 [2024-11-26 19:19:20.595380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.419 [2024-11-26 19:19:20.595386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.419 [2024-11-26 19:19:20.600646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.419 [2024-11-26 19:19:20.600664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.419 [2024-11-26 19:19:20.600670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.419 [2024-11-26 19:19:20.605835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.419 [2024-11-26 19:19:20.605853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.419 [2024-11-26 19:19:20.605859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.419 [2024-11-26 19:19:20.611115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.419 [2024-11-26 19:19:20.611133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.419 [2024-11-26 19:19:20.611139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.419 [2024-11-26 19:19:20.616233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.419 [2024-11-26 19:19:20.616250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.419 [2024-11-26 19:19:20.616256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.419 [2024-11-26 19:19:20.621494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.419 [2024-11-26 19:19:20.621512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.419 [2024-11-26 19:19:20.621518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.681 [2024-11-26 19:19:20.626566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.681 [2024-11-26 19:19:20.626585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.681 [2024-11-26 19:19:20.626591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.681 [2024-11-26 19:19:20.631699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.681 [2024-11-26 19:19:20.631723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.681 [2024-11-26 19:19:20.631729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.681 [2024-11-26 19:19:20.636968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.681 [2024-11-26 19:19:20.636985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.681 [2024-11-26 19:19:20.636991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.681 [2024-11-26 19:19:20.642368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.681 [2024-11-26 19:19:20.642386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.681 [2024-11-26 19:19:20.642392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.681 [2024-11-26 19:19:20.647773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.681 [2024-11-26 19:19:20.647790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.681 [2024-11-26 19:19:20.647796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.681 [2024-11-26 19:19:20.653063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.681 [2024-11-26 19:19:20.653081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.681 [2024-11-26 19:19:20.653087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.681 [2024-11-26 19:19:20.658587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.681 [2024-11-26 19:19:20.658605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.681 [2024-11-26 19:19:20.658611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.681 [2024-11-26 19:19:20.663321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.681 [2024-11-26 19:19:20.663339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.681 [2024-11-26 19:19:20.663345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.681 [2024-11-26 19:19:20.668415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.681 [2024-11-26 19:19:20.668433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.681 [2024-11-26 19:19:20.668439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.681 [2024-11-26 19:19:20.673287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.682 [2024-11-26 19:19:20.673304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
READ sqid:1 cid:7 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.682 [2024-11-26 19:19:20.673311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:03.682 [2024-11-26 19:19:20.678330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:03.682 [2024-11-26 19:19:20.678348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.682 [2024-11-26 19:19:20.678355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:03.682 [2024-11-26 19:19:20.683252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:03.682 [2024-11-26 19:19:20.683270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.682 [2024-11-26 19:19:20.683277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:03.682 [2024-11-26 19:19:20.688222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:03.682 [2024-11-26 19:19:20.688240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.682 [2024-11-26 19:19:20.688246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:03.682 [2024-11-26 19:19:20.693006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:03.682 [2024-11-26 19:19:20.693024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.682 [2024-11-26 19:19:20.693030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:03.682 [2024-11-26 19:19:20.698060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:03.682 [2024-11-26 19:19:20.698078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.682 [2024-11-26 19:19:20.698084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:03.682 [2024-11-26 19:19:20.703073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:03.682 [2024-11-26 19:19:20.703091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.682 [2024-11-26 19:19:20.703097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:03.682 [2024-11-26 19:19:20.708053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:03.682 [2024-11-26 19:19:20.708071] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.682 [2024-11-26 19:19:20.708077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:03.682 [2024-11-26 19:19:20.713198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:03.682 [2024-11-26 19:19:20.713215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.682 [2024-11-26 19:19:20.713221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:03.682 [2024-11-26 19:19:20.718045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:03.682 [2024-11-26 19:19:20.718062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.682 [2024-11-26 19:19:20.718072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:03.682 [2024-11-26 19:19:20.722806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:03.682 [2024-11-26 19:19:20.722824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.682 [2024-11-26 19:19:20.722830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:03.682 [2024-11-26 19:19:20.728010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:03.682 [2024-11-26 19:19:20.728028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.682 [2024-11-26 19:19:20.728033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:03.682 [2024-11-26 19:19:20.732851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:03.682 [2024-11-26 19:19:20.732870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.682 [2024-11-26 19:19:20.732876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:03.682 [2024-11-26 19:19:20.737763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:03.682 [2024-11-26 19:19:20.737780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.682 [2024-11-26 19:19:20.737786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:03.682 [2024-11-26 19:19:20.743008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 
00:29:03.682 [2024-11-26 19:19:20.743026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.682 [2024-11-26 19:19:20.743032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:03.682 [2024-11-26 19:19:20.747982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:03.682 [2024-11-26 19:19:20.747999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.682 [2024-11-26 19:19:20.748005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:03.682 [2024-11-26 19:19:20.753319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:03.682 [2024-11-26 19:19:20.753336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.682 [2024-11-26 19:19:20.753342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:03.682 [2024-11-26 19:19:20.759036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:03.682 [2024-11-26 19:19:20.759054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.682 [2024-11-26 19:19:20.759060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:03.682 [2024-11-26 19:19:20.764345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:03.682 [2024-11-26 19:19:20.764366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.682 [2024-11-26 19:19:20.764372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:03.682 [2024-11-26 19:19:20.769395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:03.682 [2024-11-26 19:19:20.769412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.682 [2024-11-26 19:19:20.769419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:03.682 [2024-11-26 19:19:20.774444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570) 00:29:03.682 [2024-11-26 19:19:20.774461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.682 [2024-11-26 19:19:20.774467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:03.682 [2024-11-26 19:19:20.779455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
00:29:03.682 [2024-11-26 19:19:20.789747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1559570)
00:29:03.682 [2024-11-26 19:19:20.789765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.682 [2024-11-26 19:19:20.789771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.682 5042.50 IOPS, 630.31 MiB/s
00:29:03.682                                                                    Latency(us)
00:29:03.682 [2024-11-26T18:19:20.895Z] Device Information : runtime(s)    IOPS   MiB/s  Fail/s   TO/s  Average      min      max
00:29:03.682 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:03.682 nvme0n1            :       2.00  5043.42  630.43    0.00   0.00  3170.03   387.41 11304.96
00:29:03.682 [2024-11-26T18:19:20.895Z] ===================================================================================================================
00:29:03.682 [2024-11-26T18:19:20.895Z] Total              :             5043.42  630.43    0.00   0.00  3170.03   387.41 11304.96
00:29:03.682 {
00:29:03.682   "results": [
00:29:03.682     {
00:29:03.682       "job": "nvme0n1",
00:29:03.682       "core_mask": "0x2",
00:29:03.682       "workload": "randread",
00:29:03.682       "status": "finished",
00:29:03.682       "queue_depth": 16,
00:29:03.682       "io_size": 131072,
00:29:03.682       "runtime": 2.003004,
00:29:03.682       "iops": 5043.424775986468,
00:29:03.682       "mibps": 630.4280969983085,
00:29:03.682       "io_failed": 0,
00:29:03.683       "io_timeout": 0,
00:29:03.683       "avg_latency_us": 3170.030193361051,
00:29:03.683       "min_latency_us": 387.41333333333336,
00:29:03.683       "max_latency_us": 11304.96
00:29:03.683     }
00:29:03.683   ],
00:29:03.683   "core_count": 1
00:29:03.683 }
00:29:03.683 19:19:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:03.683 19:19:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:03.683 19:19:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:03.683 | .driver_specific
00:29:03.683 | .nvme_error
00:29:03.683 | .status_code
00:29:03.683 | .command_transient_transport_error'
00:29:03.683 19:19:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:03.943 19:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 326 > 0 ))
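The xtrace above shows where the pass/fail number comes from: get_transient_errcount pulls the per-status-code NVMe error counters that bdev_get_iostat reports (available because the controller is set up with bdev_nvme_set_options --nvme-error-stat, as visible in the randwrite setup below), and digest.sh@71 asserts the substituted value is positive ((( 326 > 0 )) in this run). A minimal sketch of that helper, reconstructed from the trace rather than quoted from digest.sh; the rootdir path and socket are the ones printed in this log:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    get_transient_errcount() {
        local bdev=$1
        # Query bdevperf's RPC server for I/O statistics and extract how many
        # completions ended in COMMAND TRANSIENT TRANSPORT ERROR.
        "$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
    }

    # The test passes only if at least one transient transport error was counted.
    (( $(get_transient_errcount nvme0n1) > 0 ))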
00:29:03.943 19:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3118387
00:29:03.943 19:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3118387 ']'
00:29:03.943 19:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3118387
00:29:03.943 19:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:29:03.943 19:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:03.943 19:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3118387
00:29:03.943 19:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:03.943 19:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:03.943 19:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3118387'
00:29:03.943 killing process with pid 3118387
00:29:03.943 19:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3118387
00:29:03.943 Received shutdown signal, test time was about 2.000000 seconds
00:29:03.944
00:29:03.944                                                                    Latency(us)
00:29:03.944 [2024-11-26T18:19:21.157Z] Device Information : runtime(s)    IOPS   MiB/s  Fail/s   TO/s  Average      min      max
00:29:03.944 [2024-11-26T18:19:21.157Z] ===================================================================================================================
00:29:03.944 [2024-11-26T18:19:21.157Z] Total              :       0.00    0.00    0.00    0.00   0.00     0.00     0.00     0.00
00:29:03.944 19:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3118387
00:29:04.204 19:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:29:04.204 19:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:04.204 19:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:04.204 19:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:29:04.204 19:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:29:04.204 19:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3119173
00:29:04.204 19:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3119173 /var/tmp/bperf.sock
00:29:04.204 19:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3119173 ']'
00:29:04.204 19:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:29:04.204 19:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:04.204 19:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:04.204 19:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:04.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
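At this point run_bperf_err has relaunched bdevperf in wait mode and blocks in waitforlisten until the new RPC socket answers. A short sketch of that launch step, using the paths printed in this log; the polling loop is an illustrative stand-in for the waitforlisten helper in common/autotest_common.sh, not its actual source:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # -z starts bdevperf idle: it only serves RPCs until perform_tests is
    # invoked, giving the test time to configure digest-error injection first.
    "$rootdir/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!

    # Poll the RPC socket until it answers, bailing out if bdevperf died.
    until "$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods &> /dev/null; do
        kill -0 "$bperfpid" 2> /dev/null || exit 1
        sleep 0.1
    done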
00:29:04.204 19:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:04.204 19:19:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:04.204 [2024-11-26 19:19:21.233798] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization...
00:29:04.204 [2024-11-26 19:19:21.233855] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3119173 ]
00:29:04.204 [2024-11-26 19:19:21.315520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:04.204 [2024-11-26 19:19:21.344933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:05.143 19:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:05.143 19:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:29:05.143 19:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:05.143 19:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:05.143 19:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:05.143 19:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:05.143 19:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:05.143 19:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:05.143 19:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:05.143 19:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:05.402 nvme0n1
00:29:05.402 19:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:05.402 19:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:05.402 19:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:05.402 19:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:05.402 19:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:05.403 19:19:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
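Taken together, the trace above is the whole error-injection setup for the randwrite pass. A condensed sketch of the same sequence; bperf_rpc and rpc_cmd are illustrative stand-ins for the helpers named in the trace (the former targets bdevperf's socket, as the digest.sh@18 expansions show, and the latter is assumed to target the nvmf target application's default RPC socket):

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    bperf_rpc() { "$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }  # bdevperf app
    rpc_cmd()   { "$rootdir/scripts/rpc.py" "$@"; }                         # target app (assumed)

    # Count NVMe errors per status code, and retry failed I/O indefinitely so a
    # corrupted digest surfaces as a counted TRANSIENT TRANSPORT ERROR, not EIO.
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Keep CRC32C injection off while the controller attaches ...
    rpc_cmd accel_error_inject_error -o crc32c -t disable

    # ... attach with data digest enabled so every data PDU carries a CRC32C ...
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # ... then corrupt the next 256 CRC32C computations and start the workload.
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
    "$rootdir/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests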
00:29:05.403 Running I/O for 2 seconds...
00:29:05.663 [2024-11-26 19:19:22.619487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee84c0
00:29:05.663 [2024-11-26 19:19:22.620589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:05.663 [2024-11-26 19:19:22.620615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:29:05.663 [2024-11-26 19:19:22.628127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef57b0
00:29:05.663 [2024-11-26 19:19:22.629199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:05.663 [2024-11-26 19:19:22.629215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
[... the same three-line sequence (a CRC32C data digest error from tcp.c:data_crc32_calc_done on tqpair 0x229a3d0 with a varying pdu, the affected WRITE command, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for further WRITEs with varying cid and lba, through 19:19:23.263991 ...]
00:29:06.187 [2024-11-26 19:19:23.271459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef7970
00:29:06.187 [2024-11-26 19:19:23.272554]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.187 [2024-11-26 19:19:23.272571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.187 [2024-11-26 19:19:23.280024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee49b0 00:29:06.187 [2024-11-26 19:19:23.281112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.187 [2024-11-26 19:19:23.281129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.187 [2024-11-26 19:19:23.288604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee1b48 00:29:06.187 [2024-11-26 19:19:23.289696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.187 [2024-11-26 19:19:23.289712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.187 [2024-11-26 19:19:23.297173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee2c28 00:29:06.187 [2024-11-26 19:19:23.298218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.187 [2024-11-26 19:19:23.298234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.187 [2024-11-26 19:19:23.305728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ede470 00:29:06.187 [2024-11-26 19:19:23.306818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.187 [2024-11-26 19:19:23.306836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.187 [2024-11-26 19:19:23.314285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016edf550 00:29:06.187 [2024-11-26 19:19:23.315361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.188 [2024-11-26 19:19:23.315377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.188 [2024-11-26 19:19:23.322848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee0630 00:29:06.188 [2024-11-26 19:19:23.323945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.188 [2024-11-26 19:19:23.323961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.188 [2024-11-26 19:19:23.331427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee4140 00:29:06.188 [2024-11-26 
19:19:23.332520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.188 [2024-11-26 19:19:23.332535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.188 [2024-11-26 19:19:23.339990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee8088 00:29:06.188 [2024-11-26 19:19:23.341085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.188 [2024-11-26 19:19:23.341101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.188 [2024-11-26 19:19:23.348550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee5ec8 00:29:06.188 [2024-11-26 19:19:23.349647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.188 [2024-11-26 19:19:23.349663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.188 [2024-11-26 19:19:23.357139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef1868 00:29:06.188 [2024-11-26 19:19:23.358232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.188 [2024-11-26 19:19:23.358247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.188 [2024-11-26 19:19:23.365724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef2948 00:29:06.188 [2024-11-26 19:19:23.366783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.188 [2024-11-26 19:19:23.366800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.188 [2024-11-26 19:19:23.374296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016eef6a8 00:29:06.188 [2024-11-26 19:19:23.375409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.188 [2024-11-26 19:19:23.375424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.188 [2024-11-26 19:19:23.382868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016eee5c8 00:29:06.188 [2024-11-26 19:19:23.383924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.188 [2024-11-26 19:19:23.383941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.188 [2024-11-26 19:19:23.391413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef0788 
00:29:06.188 [2024-11-26 19:19:23.392507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.188 [2024-11-26 19:19:23.392523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.448 [2024-11-26 19:19:23.399958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef3a28 00:29:06.448 [2024-11-26 19:19:23.401051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.448 [2024-11-26 19:19:23.401067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.448 [2024-11-26 19:19:23.408524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef4b08 00:29:06.448 [2024-11-26 19:19:23.409615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.448 [2024-11-26 19:19:23.409631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.448 [2024-11-26 19:19:23.417086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee4de8 00:29:06.448 [2024-11-26 19:19:23.418185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.448 [2024-11-26 19:19:23.418201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.448 [2024-11-26 19:19:23.425652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee1710 00:29:06.448 [2024-11-26 19:19:23.426748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.448 [2024-11-26 19:19:23.426764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.448 [2024-11-26 19:19:23.434217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee27f0 00:29:06.448 [2024-11-26 19:19:23.435321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.448 [2024-11-26 19:19:23.435337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.448 [2024-11-26 19:19:23.442763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ede038 00:29:06.448 [2024-11-26 19:19:23.443867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.448 [2024-11-26 19:19:23.443883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.448 [2024-11-26 19:19:23.451319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with 
pdu=0x200016edf118 00:29:06.449 [2024-11-26 19:19:23.452415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.449 [2024-11-26 19:19:23.452430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.449 [2024-11-26 19:19:23.459891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee01f8 00:29:06.449 [2024-11-26 19:19:23.460999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.449 [2024-11-26 19:19:23.461015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.449 [2024-11-26 19:19:23.468463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee4578 00:29:06.449 [2024-11-26 19:19:23.469564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.449 [2024-11-26 19:19:23.469580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.449 [2024-11-26 19:19:23.477012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee3498 00:29:06.449 [2024-11-26 19:19:23.478113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.449 [2024-11-26 19:19:23.478129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.449 [2024-11-26 19:19:23.485580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee6300 00:29:06.449 [2024-11-26 19:19:23.486637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.449 [2024-11-26 19:19:23.486652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.449 [2024-11-26 19:19:23.494119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee5220 00:29:06.449 [2024-11-26 19:19:23.495211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.449 [2024-11-26 19:19:23.495226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.449 [2024-11-26 19:19:23.502690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef2510 00:29:06.449 [2024-11-26 19:19:23.503782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.449 [2024-11-26 19:19:23.503797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.449 [2024-11-26 19:19:23.511248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x229a3d0) with pdu=0x200016eebfd0 00:29:06.449 [2024-11-26 19:19:23.512368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.449 [2024-11-26 19:19:23.512384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.449 [2024-11-26 19:19:23.519824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016eed0b0 00:29:06.449 [2024-11-26 19:19:23.520930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.449 [2024-11-26 19:19:23.520945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.449 [2024-11-26 19:19:23.527621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee84c0 00:29:06.449 [2024-11-26 19:19:23.528978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.449 [2024-11-26 19:19:23.528996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:06.449 [2024-11-26 19:19:23.535652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee8088 00:29:06.449 [2024-11-26 19:19:23.536370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.449 [2024-11-26 19:19:23.536386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:06.449 [2024-11-26 19:19:23.544198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee4140 00:29:06.449 [2024-11-26 19:19:23.544931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.449 [2024-11-26 19:19:23.544947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:06.449 [2024-11-26 19:19:23.552764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef6890 00:29:06.449 [2024-11-26 19:19:23.553521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:18792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.449 [2024-11-26 19:19:23.553536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:06.449 [2024-11-26 19:19:23.561359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016efc998 00:29:06.449 [2024-11-26 19:19:23.562099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.449 [2024-11-26 19:19:23.562115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:06.449 [2024-11-26 19:19:23.569947] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x229a3d0) with pdu=0x200016efda78 00:29:06.449 [2024-11-26 19:19:23.570695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.449 [2024-11-26 19:19:23.570711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:06.449 [2024-11-26 19:19:23.578493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016efef90 00:29:06.449 [2024-11-26 19:19:23.579220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.449 [2024-11-26 19:19:23.579236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:06.449 [2024-11-26 19:19:23.587038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016eed4e8 00:29:06.449 [2024-11-26 19:19:23.587739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.449 [2024-11-26 19:19:23.587755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:06.449 [2024-11-26 19:19:23.595731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016eec408 00:29:06.449 [2024-11-26 19:19:23.596460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.449 [2024-11-26 19:19:23.596476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:06.449 [2024-11-26 19:19:23.604303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016eeb328 00:29:06.449 [2024-11-26 19:19:23.605065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.449 [2024-11-26 19:19:23.605081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:06.449 [2024-11-26 19:19:23.612849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016eea248 00:29:06.449 29706.00 IOPS, 116.04 MiB/s [2024-11-26T18:19:23.662Z] [2024-11-26 19:19:23.613810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.449 [2024-11-26 19:19:23.613824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:06.449 [2024-11-26 19:19:23.621413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016efb048 00:29:06.449 [2024-11-26 19:19:23.622142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.449 [2024-11-26 19:19:23.622160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:002d p:0 m:0 dnr:0 
00:29:06.449 [2024-11-26 19:19:23.629970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef9f68 00:29:06.449 [2024-11-26 19:19:23.630708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.449 [2024-11-26 19:19:23.630723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:06.449 [2024-11-26 19:19:23.638512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef96f8 00:29:06.449 [2024-11-26 19:19:23.639225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.449 [2024-11-26 19:19:23.639240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:06.449 [2024-11-26 19:19:23.647079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef8618 00:29:06.449 [2024-11-26 19:19:23.647814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.449 [2024-11-26 19:19:23.647829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:06.449 [2024-11-26 19:19:23.655740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016eeb760 00:29:06.449 [2024-11-26 19:19:23.656434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.449 [2024-11-26 19:19:23.656450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:06.710 [2024-11-26 19:19:23.664317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef1ca0 00:29:06.710 [2024-11-26 19:19:23.665046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.710 [2024-11-26 19:19:23.665062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:06.710 [2024-11-26 19:19:23.672855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee5a90 00:29:06.710 [2024-11-26 19:19:23.673575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:54 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.710 [2024-11-26 19:19:23.673590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:06.710 [2024-11-26 19:19:23.681519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee6b70 00:29:06.710 [2024-11-26 19:19:23.682230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.710 [2024-11-26 19:19:23.682246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 
sqhd:002d p:0 m:0 dnr:0 00:29:06.710 [2024-11-26 19:19:23.690084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee3d08 00:29:06.710 [2024-11-26 19:19:23.690781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.710 [2024-11-26 19:19:23.690797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:06.710 [2024-11-26 19:19:23.698649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016efe720 00:29:06.710 [2024-11-26 19:19:23.699363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.710 [2024-11-26 19:19:23.699378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:06.710 [2024-11-26 19:19:23.707216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016eff3c8 00:29:06.710 [2024-11-26 19:19:23.707940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.710 [2024-11-26 19:19:23.707955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:06.710 [2024-11-26 19:19:23.715768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee6fa8 00:29:06.710 [2024-11-26 19:19:23.716500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.710 [2024-11-26 19:19:23.716515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:06.710 [2024-11-26 19:19:23.724326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee9168 00:29:06.710 [2024-11-26 19:19:23.725055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.710 [2024-11-26 19:19:23.725071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:06.710 [2024-11-26 19:19:23.732865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016eeaef0 00:29:06.710 [2024-11-26 19:19:23.733595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.710 [2024-11-26 19:19:23.733610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:06.710 [2024-11-26 19:19:23.741436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee9e10 00:29:06.710 [2024-11-26 19:19:23.742178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.710 [2024-11-26 19:19:23.742194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:78 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:06.710 [2024-11-26 19:19:23.750003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee8d30 00:29:06.710 [2024-11-26 19:19:23.750731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.710 [2024-11-26 19:19:23.750749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:06.710 [2024-11-26 19:19:23.758558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016efb480 00:29:06.710 [2024-11-26 19:19:23.759298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.710 [2024-11-26 19:19:23.759314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:06.710 [2024-11-26 19:19:23.767118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee73e0 00:29:06.710 [2024-11-26 19:19:23.767846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.710 [2024-11-26 19:19:23.767862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:06.710 [2024-11-26 19:19:23.775990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef3e60 00:29:06.710 [2024-11-26 19:19:23.776829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.710 [2024-11-26 19:19:23.776845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:06.710 [2024-11-26 19:19:23.784722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef4f40 00:29:06.710 [2024-11-26 19:19:23.785577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.710 [2024-11-26 19:19:23.785593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:06.710 [2024-11-26 19:19:23.793290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef7da8 00:29:06.710 [2024-11-26 19:19:23.794145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.710 [2024-11-26 19:19:23.794163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:06.710 [2024-11-26 19:19:23.801848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef2510 00:29:06.710 [2024-11-26 19:19:23.802691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:22930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.710 [2024-11-26 19:19:23.802707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:06.710 [2024-11-26 19:19:23.810404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee5220 00:29:06.710 [2024-11-26 19:19:23.811221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.710 [2024-11-26 19:19:23.811237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:06.711 [2024-11-26 19:19:23.818945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee6300 00:29:06.711 [2024-11-26 19:19:23.819804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.711 [2024-11-26 19:19:23.819820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:06.711 [2024-11-26 19:19:23.827496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee3498 00:29:06.711 [2024-11-26 19:19:23.828317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.711 [2024-11-26 19:19:23.828333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:06.711 [2024-11-26 19:19:23.836056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee1b48 00:29:06.711 [2024-11-26 19:19:23.836922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.711 [2024-11-26 19:19:23.836938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:06.711 [2024-11-26 19:19:23.844622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee49b0 00:29:06.711 [2024-11-26 19:19:23.845471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.711 [2024-11-26 19:19:23.845488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:06.711 [2024-11-26 19:19:23.853187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef7970 00:29:06.711 [2024-11-26 19:19:23.854026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.711 [2024-11-26 19:19:23.854041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:06.711 [2024-11-26 19:19:23.861748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016eebb98 00:29:06.711 [2024-11-26 19:19:23.862588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.711 [2024-11-26 19:19:23.862604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:06.711 [2024-11-26 19:19:23.870298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016eecc78 00:29:06.711 [2024-11-26 19:19:23.871006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.711 [2024-11-26 19:19:23.871022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:06.711 [2024-11-26 19:19:23.878906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef2510 00:29:06.711 [2024-11-26 19:19:23.879742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.711 [2024-11-26 19:19:23.879758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:06.711 [2024-11-26 19:19:23.887692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef7da8 00:29:06.711 [2024-11-26 19:19:23.888530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.711 [2024-11-26 19:19:23.888545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:06.711 [2024-11-26 19:19:23.896260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef2510 00:29:06.711 [2024-11-26 19:19:23.897103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:25173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.711 [2024-11-26 19:19:23.897118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:06.711 [2024-11-26 19:19:23.904827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef7da8 00:29:06.711 [2024-11-26 19:19:23.905530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.711 [2024-11-26 19:19:23.905545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:06.711 [2024-11-26 19:19:23.913376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef2510 00:29:06.711 [2024-11-26 19:19:23.914205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.711 [2024-11-26 19:19:23.914221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:06.973 [2024-11-26 19:19:23.921953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef7da8 00:29:06.973 [2024-11-26 19:19:23.922750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:10794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-11-26 19:19:23.922766] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:06.973 [2024-11-26 19:19:23.931251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016eea248 00:29:06.973 [2024-11-26 19:19:23.932167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-11-26 19:19:23.932182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:06.973 [2024-11-26 19:19:23.939764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016edf118 00:29:06.973 [2024-11-26 19:19:23.940678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-11-26 19:19:23.940694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:06.973 [2024-11-26 19:19:23.948330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee3498 00:29:06.973 [2024-11-26 19:19:23.949226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:25395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-11-26 19:19:23.949241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:06.973 [2024-11-26 19:19:23.956893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016efb048 00:29:06.973 [2024-11-26 19:19:23.957770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-11-26 19:19:23.957785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:06.973 [2024-11-26 19:19:23.965477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee01f8 00:29:06.973 [2024-11-26 19:19:23.966364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-11-26 19:19:23.966379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:06.973 [2024-11-26 19:19:23.974059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee1b48 00:29:06.973 [2024-11-26 19:19:23.974981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-11-26 19:19:23.975000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:06.973 [2024-11-26 19:19:23.982632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016eea248 00:29:06.973 [2024-11-26 19:19:23.983547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-11-26 
19:19:23.983563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:06.973 [2024-11-26 19:19:23.991227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016edf118 00:29:06.973 [2024-11-26 19:19:23.992136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-11-26 19:19:23.992151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:06.973 [2024-11-26 19:19:23.999790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee3498 00:29:06.973 [2024-11-26 19:19:24.000664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-11-26 19:19:24.000680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:06.973 [2024-11-26 19:19:24.008365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016efb048 00:29:06.973 [2024-11-26 19:19:24.009265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-11-26 19:19:24.009281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:06.973 [2024-11-26 19:19:24.016933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee01f8 00:29:06.973 [2024-11-26 19:19:24.017850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-11-26 19:19:24.017865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:06.973 [2024-11-26 19:19:24.025518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee1b48 00:29:06.973 [2024-11-26 19:19:24.026435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-11-26 19:19:24.026450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:06.973 [2024-11-26 19:19:24.034110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016eea248 00:29:06.973 [2024-11-26 19:19:24.035037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-11-26 19:19:24.035053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:06.973 [2024-11-26 19:19:24.042693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016edf118 00:29:06.973 [2024-11-26 19:19:24.043563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:06.973 [2024-11-26 19:19:24.043579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:06.973 [2024-11-26 19:19:24.051265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee3498 00:29:06.973 [2024-11-26 19:19:24.052180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-11-26 19:19:24.052196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:06.973 [2024-11-26 19:19:24.059851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016efb048 00:29:06.974 [2024-11-26 19:19:24.060771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-11-26 19:19:24.060786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:06.974 [2024-11-26 19:19:24.068434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee01f8 00:29:06.974 [2024-11-26 19:19:24.069328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-11-26 19:19:24.069343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:06.974 [2024-11-26 19:19:24.077036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee1b48 00:29:06.974 [2024-11-26 19:19:24.077810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-11-26 19:19:24.077826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:06.974 [2024-11-26 19:19:24.085844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016eee190 00:29:06.974 [2024-11-26 19:19:24.086906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-11-26 19:19:24.086921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:06.974 [2024-11-26 19:19:24.094553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016efcdd0 00:29:06.974 [2024-11-26 19:19:24.095617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-11-26 19:19:24.095632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:06.974 [2024-11-26 19:19:24.103105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef6cc8 00:29:06.974 [2024-11-26 19:19:24.104179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12187 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-11-26 19:19:24.104194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:06.974 [2024-11-26 19:19:24.111677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef5be8 00:29:06.974 [2024-11-26 19:19:24.112722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-11-26 19:19:24.112738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:06.974 [2024-11-26 19:19:24.120241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016eeff18 00:29:06.974 [2024-11-26 19:19:24.121316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:25160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-11-26 19:19:24.121332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:06.974 [2024-11-26 19:19:24.128805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016eeee38 00:29:06.974 [2024-11-26 19:19:24.129894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-11-26 19:19:24.129911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:06.974 [2024-11-26 19:19:24.137363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef6020 00:29:06.974 [2024-11-26 19:19:24.138438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-11-26 19:19:24.138454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:06.974 [2024-11-26 19:19:24.145901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016efbcf0 00:29:06.974 [2024-11-26 19:19:24.146969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-11-26 19:19:24.146985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:06.974 [2024-11-26 19:19:24.154459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016efa7d8 00:29:06.974 [2024-11-26 19:19:24.155537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-11-26 19:19:24.155553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:06.974 [2024-11-26 19:19:24.163044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016efdeb0 00:29:06.974 [2024-11-26 19:19:24.164131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 
lba:3704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-11-26 19:19:24.164147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:06.974 [2024-11-26 19:19:24.171624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016efb8b8 00:29:06.974 [2024-11-26 19:19:24.172708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-11-26 19:19:24.172724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:06.974 [2024-11-26 19:19:24.180183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016eeb760 00:29:06.974 [2024-11-26 19:19:24.181232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-11-26 19:19:24.181247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.234 [2024-11-26 19:19:24.188727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef1ca0 00:29:07.234 [2024-11-26 19:19:24.189788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.234 [2024-11-26 19:19:24.189803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.234 [2024-11-26 19:19:24.197290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee5a90 00:29:07.234 [2024-11-26 19:19:24.198415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.234 [2024-11-26 19:19:24.198433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.234 [2024-11-26 19:19:24.205841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef0bc0 00:29:07.234 [2024-11-26 19:19:24.206903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.234 [2024-11-26 19:19:24.206918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.234 [2024-11-26 19:19:24.214394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016eef6a8 00:29:07.234 [2024-11-26 19:19:24.215475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.234 [2024-11-26 19:19:24.215491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.234 [2024-11-26 19:19:24.222959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016eee5c8 00:29:07.234 [2024-11-26 19:19:24.224027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:42 nsid:1 lba:18137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.234 [2024-11-26 19:19:24.224043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.235 [2024-11-26 19:19:24.231513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef0788 00:29:07.235 [2024-11-26 19:19:24.232583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.235 [2024-11-26 19:19:24.232599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.235 [2024-11-26 19:19:24.240079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef3e60 00:29:07.235 [2024-11-26 19:19:24.241147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.235 [2024-11-26 19:19:24.241167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.235 [2024-11-26 19:19:24.248636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef2d80 00:29:07.235 [2024-11-26 19:19:24.249706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.235 [2024-11-26 19:19:24.249722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.235 [2024-11-26 19:19:24.257203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef1430 00:29:07.235 [2024-11-26 19:19:24.258273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.235 [2024-11-26 19:19:24.258288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.235 [2024-11-26 19:19:24.265765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016eefae0 00:29:07.235 [2024-11-26 19:19:24.266836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.235 [2024-11-26 19:19:24.266852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.235 [2024-11-26 19:19:24.274322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016efe720 00:29:07.235 [2024-11-26 19:19:24.275367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.235 [2024-11-26 19:19:24.275383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.235 [2024-11-26 19:19:24.282871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee3d08 00:29:07.235 [2024-11-26 19:19:24.283946] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.235 [2024-11-26 19:19:24.283961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.235 [2024-11-26 19:19:24.291423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee73e0 00:29:07.235 [2024-11-26 19:19:24.292458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.235 [2024-11-26 19:19:24.292473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.235 [2024-11-26 19:19:24.299987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016efb480 00:29:07.235 [2024-11-26 19:19:24.301068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.235 [2024-11-26 19:19:24.301084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.235 [2024-11-26 19:19:24.308546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee8d30 00:29:07.235 [2024-11-26 19:19:24.309614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.235 [2024-11-26 19:19:24.309629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.235 [2024-11-26 19:19:24.317114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef20d8 00:29:07.235 [2024-11-26 19:19:24.318186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.235 [2024-11-26 19:19:24.318201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.235 [2024-11-26 19:19:24.325670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee5658 00:29:07.235 [2024-11-26 19:19:24.326756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.235 [2024-11-26 19:19:24.326772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.235 [2024-11-26 19:19:24.334218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee6738 00:29:07.235 [2024-11-26 19:19:24.335280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.235 [2024-11-26 19:19:24.335296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.235 [2024-11-26 19:19:24.342766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee38d0 00:29:07.235 [2024-11-26 
19:19:24.343844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.235 [2024-11-26 19:19:24.343860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.235 [2024-11-26 19:19:24.351350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016eebfd0 00:29:07.235 [2024-11-26 19:19:24.352414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.235 [2024-11-26 19:19:24.352430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.235 [2024-11-26 19:19:24.359920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016eed0b0 00:29:07.235 [2024-11-26 19:19:24.360995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.235 [2024-11-26 19:19:24.361011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.235 [2024-11-26 19:19:24.368500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016eee190 00:29:07.235 [2024-11-26 19:19:24.369578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:8382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.235 [2024-11-26 19:19:24.369594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.235 [2024-11-26 19:19:24.377047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016efcdd0 00:29:07.235 [2024-11-26 19:19:24.378124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.235 [2024-11-26 19:19:24.378140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.235 [2024-11-26 19:19:24.385607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef6cc8 00:29:07.235 [2024-11-26 19:19:24.386697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.235 [2024-11-26 19:19:24.386713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.235 [2024-11-26 19:19:24.394185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef5be8 00:29:07.235 [2024-11-26 19:19:24.395255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.235 [2024-11-26 19:19:24.395270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.235 [2024-11-26 19:19:24.402759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016eeff18 
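
Every failure in this run prints the same three records: tcp.c:data_crc32_calc_done flags a CRC-32C data digest mismatch on a PDU, nvme_qpair.c prints the WRITE that was in flight, and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), NVMe generic status 0x22, instead of succeeding. A minimal sketch for tallying the flood offline, assuming this console output has been saved to a file (console.log is a hypothetical name):

  # Digest failures detected by the initiator's TCP transport:
  grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' console.log
  # Commands completed with the transient transport error status (00/22):
  grep -o 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' console.log | wc -l

In this log the two counts move together, one failed WRITE per digest error, which is what lets the harness treat the completion counter as a digest-error counter.
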
00:29:07.235 [2024-11-26 19:19:24.403791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.235 [2024-11-26 19:19:24.403807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.235 [2024-11-26 19:19:24.411352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016eeee38 00:29:07.235 [2024-11-26 19:19:24.412466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.235 [2024-11-26 19:19:24.412482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.235 [2024-11-26 19:19:24.419917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef6020 00:29:07.235 [2024-11-26 19:19:24.421008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.235 [2024-11-26 19:19:24.421026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.235 [2024-11-26 19:19:24.428485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016efbcf0 00:29:07.235 [2024-11-26 19:19:24.429563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:15759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.235 [2024-11-26 19:19:24.429579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.235 [2024-11-26 19:19:24.437056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016efa7d8 00:29:07.235 [2024-11-26 19:19:24.438122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.235 [2024-11-26 19:19:24.438138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.496 [2024-11-26 19:19:24.445637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016efdeb0 00:29:07.496 [2024-11-26 19:19:24.446716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.496 [2024-11-26 19:19:24.446732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.496 [2024-11-26 19:19:24.454209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016efb8b8 00:29:07.496 [2024-11-26 19:19:24.455280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.496 [2024-11-26 19:19:24.455296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.496 [2024-11-26 19:19:24.462789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) 
with pdu=0x200016eeb760 00:29:07.496 [2024-11-26 19:19:24.463855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.496 [2024-11-26 19:19:24.463871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.496 [2024-11-26 19:19:24.471355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef1ca0 00:29:07.496 [2024-11-26 19:19:24.472433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.496 [2024-11-26 19:19:24.472449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.496 [2024-11-26 19:19:24.479925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee5a90 00:29:07.496 [2024-11-26 19:19:24.481020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.496 [2024-11-26 19:19:24.481036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.496 [2024-11-26 19:19:24.488498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef0bc0 00:29:07.496 [2024-11-26 19:19:24.489575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.496 [2024-11-26 19:19:24.489591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.496 [2024-11-26 19:19:24.497087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016eef6a8 00:29:07.496 [2024-11-26 19:19:24.498178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.496 [2024-11-26 19:19:24.498194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.496 [2024-11-26 19:19:24.505655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016eee5c8 00:29:07.496 [2024-11-26 19:19:24.506736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.496 [2024-11-26 19:19:24.506752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.496 [2024-11-26 19:19:24.514223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef0788 00:29:07.496 [2024-11-26 19:19:24.515296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.496 [2024-11-26 19:19:24.515312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.496 [2024-11-26 19:19:24.522767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x229a3d0) with pdu=0x200016ef3e60 00:29:07.496 [2024-11-26 19:19:24.523850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.496 [2024-11-26 19:19:24.523865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.496 [2024-11-26 19:19:24.531348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef2d80 00:29:07.496 [2024-11-26 19:19:24.532418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.496 [2024-11-26 19:19:24.532433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.496 [2024-11-26 19:19:24.539923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef1430 00:29:07.496 [2024-11-26 19:19:24.541008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:9050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.496 [2024-11-26 19:19:24.541024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.496 [2024-11-26 19:19:24.548491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016eefae0 00:29:07.496 [2024-11-26 19:19:24.549519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.496 [2024-11-26 19:19:24.549535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.496 [2024-11-26 19:19:24.557046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016efe720 00:29:07.496 [2024-11-26 19:19:24.558134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.496 [2024-11-26 19:19:24.558150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.496 [2024-11-26 19:19:24.565637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee3d08 00:29:07.496 [2024-11-26 19:19:24.566668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.497 [2024-11-26 19:19:24.566683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.497 [2024-11-26 19:19:24.574197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee73e0 00:29:07.497 [2024-11-26 19:19:24.575271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.497 [2024-11-26 19:19:24.575287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.497 [2024-11-26 19:19:24.582767] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016efb480 00:29:07.497 [2024-11-26 19:19:24.583838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.497 [2024-11-26 19:19:24.583854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.497 [2024-11-26 19:19:24.591483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee8d30 00:29:07.497 [2024-11-26 19:19:24.592570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:21516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.497 [2024-11-26 19:19:24.592587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.497 [2024-11-26 19:19:24.600053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ef20d8 00:29:07.497 [2024-11-26 19:19:24.601129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.497 [2024-11-26 19:19:24.601145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.497 [2024-11-26 19:19:24.608626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a3d0) with pdu=0x200016ee5658 00:29:07.497 [2024-11-26 19:19:24.609702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.497 [2024-11-26 19:19:24.609718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:29:07.497 29761.00 IOPS, 116.25 MiB/s
00:29:07.497 Latency(us)
00:29:07.497 [2024-11-26T18:19:24.710Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:07.497 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:29:07.497 nvme0n1 : 2.00 29778.25 116.32 0.00 0.00 4294.03 2007.04 10431.15
00:29:07.497 [2024-11-26T18:19:24.710Z] ===================================================================================================================
00:29:07.497 [2024-11-26T18:19:24.710Z] Total : 29778.25 116.32 0.00 0.00 4294.03 2007.04 10431.15
00:29:07.497 {
00:29:07.497 "results": [
00:29:07.497 {
00:29:07.497 "job": "nvme0n1",
00:29:07.497 "core_mask": "0x2",
00:29:07.497 "workload": "randwrite",
00:29:07.497 "status": "finished",
00:29:07.497 "queue_depth": 128,
00:29:07.497 "io_size": 4096,
00:29:07.497 "runtime": 2.002569,
00:29:07.497 "iops": 29778.249838082982,
00:29:07.497 "mibps": 116.32128843001165,
00:29:07.497 "io_failed": 0,
00:29:07.497 "io_timeout": 0,
00:29:07.497 "avg_latency_us": 4294.025191868038,
00:29:07.497 "min_latency_us": 2007.04,
00:29:07.497 "max_latency_us": 10431.146666666667
00:29:07.497 }
00:29:07.497 ],
00:29:07.497 "core_count": 1
00:29:07.497 }
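
The trace that follows is get_transient_errcount (digest.sh@27-28) reading that JSON back: bdev_get_iostat over the bperf RPC socket, piped into jq. A standalone sketch of the same query, assuming the bdevperf app is still listening on /var/tmp/bperf.sock and that error accounting was enabled with bdev_nvme_set_options --nvme-error-stat, as the harness does for each run ($SPDK is shorthand for the workspace checkout):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # The per-status counters live under driver_specific.nvme_error in the iostat JSON.
  errcount=$("$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 )) || echo 'expected transient transport errors, saw none' >&2

Here the counter comes back as 233, which is exactly what the (( 233 > 0 )) assertion below is checking.
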
00:29:07.497 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:07.497 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:07.497 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:07.497 | .driver_specific
00:29:07.497 | .nvme_error
00:29:07.497 | .status_code
00:29:07.497 | .command_transient_transport_error'
00:29:07.497 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:07.757 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 233 > 0 ))
00:29:07.757 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3119173
00:29:07.757 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3119173 ']'
00:29:07.757 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3119173
00:29:07.757 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:29:07.757 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:07.757 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3119173
00:29:07.757 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:07.757 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:07.757 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3119173'
00:29:07.757 killing process with pid 3119173
00:29:07.757 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3119173
00:29:07.757 Received shutdown signal, test time was about 2.000000 seconds
00:29:07.757
00:29:07.757 Latency(us)
00:29:07.757 [2024-11-26T18:19:24.970Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:07.757 [2024-11-26T18:19:24.970Z] ===================================================================================================================
00:29:07.757 [2024-11-26T18:19:24.970Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:07.757 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3119173
00:29:08.016 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:29:08.016 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:08.016 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:08.016 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:08.016 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:08.016 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3120038
00:29:08.016 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3120038 /var/tmp/bperf.sock
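
With the 4096-byte, queue-depth-128 pass finished and its bdevperf instance reaped, the harness starts the next error pass at a 131072-byte I/O size and queue depth 16; the digest.sh@57 line that follows is the launch itself. A sketch of the same start-and-wait sequence, with a plain polling loop standing in for the harness's waitforlisten helper (rpc_get_methods is just a cheap RPC to probe the socket with):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -z keeps bdevperf idle until a perform_tests RPC arrives on the socket.
  "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
    -w randwrite -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # Stand-in for waitforlisten: poll until the app answers on its RPC socket.
  until "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done
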
00:29:08.016 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:29:08.017 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3120038 ']'
00:29:08.017 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:08.017 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:08.017 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:08.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:08.017 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:08.017 19:19:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:08.017 [2024-11-26 19:19:25.041560] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization...
00:29:08.017 [2024-11-26 19:19:25.041619] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3120038 ]
00:29:08.017 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:08.017 Zero copy mechanism will not be used.
00:29:08.017 [2024-11-26 19:19:25.124326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:08.017 [2024-11-26 19:19:25.153564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:08.956 19:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:08.956 19:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:29:08.956 19:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:08.956 19:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:08.956 19:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:08.956 19:19:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:08.956 19:19:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:08.956 19:19:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:08.956 19:19:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:08.956 19:19:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:09.216 nvme0n1
00:29:09.216 19:19:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
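
Before any I/O is issued, the trace above wires up the error path for this pass: NVMe error statistics on and the bdev retry count pinned to -1, any previous crc32c injection cleared, a controller attached with the TCP data digest enabled (--ddgst), and finally crc32c error injection armed with the flags shown. Condensed into standalone calls; note that rpc_cmd in the trace carries no -s flag, so the assumption here is that it addresses the target application's default RPC socket, while bperf_rpc explicitly targets /var/tmp/bperf.sock:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  bperf_rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
  tgt_rpc() { "$SPDK/scripts/rpc.py" "$@"; }  # default socket; our assumption for rpc_cmd

  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  tgt_rpc accel_error_inject_error -o crc32c -t disable
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  tgt_rpc accel_error_inject_error -o crc32c -t corrupt -i 32  # flags copied verbatim from the trace

Once perform_tests starts the workload below, the corrupted crc32c results surface as the data digest errors and COMMAND TRANSIENT TRANSPORT ERROR completions that the --nvme-error-stat counters accumulate.
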
00:29:09.216 19:19:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:09.216 19:19:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:09.216 19:19:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:09.216 19:19:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:09.216 19:19:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:09.216 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:09.216 Zero copy mechanism will not be used.
00:29:09.216 Running I/O for 2 seconds...
00:29:09.216 [2024-11-26 19:19:26.335112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.217 [2024-11-26 19:19:26.335295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.217 [2024-11-26 19:19:26.335320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.217 [2024-11-26 19:19:26.345364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.217 [2024-11-26 19:19:26.345621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.217 [2024-11-26 19:19:26.345639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.217 [2024-11-26 19:19:26.356548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.217 [2024-11-26 19:19:26.356806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.217 [2024-11-26 19:19:26.356828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.217 [2024-11-26 19:19:26.368956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.217 [2024-11-26 19:19:26.369216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.217 [2024-11-26 19:19:26.369232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.217 [2024-11-26 19:19:26.380650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.217 [2024-11-26 19:19:26.380910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.217 [2024-11-26 19:19:26.380926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.217 [2024-11-26 19:19:26.392421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:09.217 [2024-11-26 19:19:26.392701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.217 [2024-11-26 19:19:26.392719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.217 [2024-11-26 19:19:26.404453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.217 [2024-11-26 19:19:26.404698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.217 [2024-11-26 19:19:26.404715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.217 [2024-11-26 19:19:26.415895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.217 [2024-11-26 19:19:26.416165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.217 [2024-11-26 19:19:26.416188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.477 [2024-11-26 19:19:26.427421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.477 [2024-11-26 19:19:26.427668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.477 [2024-11-26 19:19:26.427685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.477 [2024-11-26 19:19:26.439152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.477 [2024-11-26 19:19:26.439397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.477 [2024-11-26 19:19:26.439412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.477 [2024-11-26 19:19:26.450765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.477 [2024-11-26 19:19:26.451002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.477 [2024-11-26 19:19:26.451018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.477 [2024-11-26 19:19:26.461931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.477 [2024-11-26 19:19:26.462173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.477 [2024-11-26 19:19:26.462189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.477 [2024-11-26 19:19:26.473296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.477 [2024-11-26 19:19:26.473572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.477 [2024-11-26 19:19:26.473588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.477 [2024-11-26 19:19:26.484963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.477 [2024-11-26 19:19:26.485250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.477 [2024-11-26 19:19:26.485265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.478 [2024-11-26 19:19:26.497134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.478 [2024-11-26 19:19:26.497451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.478 [2024-11-26 19:19:26.497466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.478 [2024-11-26 19:19:26.506506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.478 [2024-11-26 19:19:26.506574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.478 [2024-11-26 19:19:26.506589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.478 [2024-11-26 19:19:26.516136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.478 [2024-11-26 19:19:26.516387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.478 [2024-11-26 19:19:26.516402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.478 [2024-11-26 19:19:26.523624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.478 [2024-11-26 19:19:26.523681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.478 [2024-11-26 19:19:26.523696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.478 [2024-11-26 19:19:26.533503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.478 [2024-11-26 19:19:26.533551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.478 [2024-11-26 19:19:26.533567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.478 [2024-11-26 19:19:26.543047] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.478 [2024-11-26 19:19:26.543305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.478 [2024-11-26 19:19:26.543321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.478 [2024-11-26 19:19:26.551215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.478 [2024-11-26 19:19:26.551471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.478 [2024-11-26 19:19:26.551487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.478 [2024-11-26 19:19:26.559225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.478 [2024-11-26 19:19:26.559428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.478 [2024-11-26 19:19:26.559444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.478 [2024-11-26 19:19:26.569722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.478 [2024-11-26 19:19:26.570026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.478 [2024-11-26 19:19:26.570043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.478 [2024-11-26 19:19:26.580740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.478 [2024-11-26 19:19:26.581062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.478 [2024-11-26 19:19:26.581078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.478 [2024-11-26 19:19:26.590261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.478 [2024-11-26 19:19:26.590313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.478 [2024-11-26 19:19:26.590328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.478 [2024-11-26 19:19:26.601277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.478 [2024-11-26 19:19:26.601590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.478 [2024-11-26 19:19:26.601608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.478 [2024-11-26 19:19:26.608708] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.478 [2024-11-26 19:19:26.609005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.478 [2024-11-26 19:19:26.609022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.478 [2024-11-26 19:19:26.617686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.478 [2024-11-26 19:19:26.617889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.478 [2024-11-26 19:19:26.617905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.478 [2024-11-26 19:19:26.625699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.478 [2024-11-26 19:19:26.626022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.478 [2024-11-26 19:19:26.626043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.478 [2024-11-26 19:19:26.633942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.478 [2024-11-26 19:19:26.634245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.478 [2024-11-26 19:19:26.634261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.478 [2024-11-26 19:19:26.640446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.478 [2024-11-26 19:19:26.640795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.478 [2024-11-26 19:19:26.640812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.478 [2024-11-26 19:19:26.646361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.478 [2024-11-26 19:19:26.646553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.478 [2024-11-26 19:19:26.646569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.478 [2024-11-26 19:19:26.655016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.478 [2024-11-26 19:19:26.655335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.478 [2024-11-26 19:19:26.655352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.478 
[2024-11-26 19:19:26.664059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.478 [2024-11-26 19:19:26.664363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.478 [2024-11-26 19:19:26.664380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.478 [2024-11-26 19:19:26.671272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.478 [2024-11-26 19:19:26.671461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.478 [2024-11-26 19:19:26.671477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.478 [2024-11-26 19:19:26.678919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.478 [2024-11-26 19:19:26.679209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.479 [2024-11-26 19:19:26.679225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.739 [2024-11-26 19:19:26.687290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.739 [2024-11-26 19:19:26.687612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.739 [2024-11-26 19:19:26.687629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.739 [2024-11-26 19:19:26.695644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.739 [2024-11-26 19:19:26.695849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.739 [2024-11-26 19:19:26.695864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.739 [2024-11-26 19:19:26.705871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.739 [2024-11-26 19:19:26.706206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.739 [2024-11-26 19:19:26.706222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.739 [2024-11-26 19:19:26.714857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:09.739 [2024-11-26 19:19:26.715183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.739 [2024-11-26 19:19:26.715200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0
00:29:09.739 [2024-11-26 19:19:26.721078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:09.739 [2024-11-26 19:19:26.721272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.739 [2024-11-26 19:19:26.721289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:09.739 [2024-11-26 19:19:26.729000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:09.739 [2024-11-26 19:19:26.729312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.739 [2024-11-26 19:19:26.729329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:09.739 [2024-11-26 19:19:26.737274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:09.739 [2024-11-26 19:19:26.737544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.739 [2024-11-26 19:19:26.737558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:09.739 [2024-11-26 19:19:26.746065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:09.739 [2024-11-26 19:19:26.746370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.739 [2024-11-26 19:19:26.746387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:09.739 [2024-11-26 19:19:26.755059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:09.739 [2024-11-26 19:19:26.755381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.739 [2024-11-26 19:19:26.755397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:09.739 [2024-11-26 19:19:26.765408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:09.739 [2024-11-26 19:19:26.765738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.739 [2024-11-26 19:19:26.765755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:09.739 [2024-11-26 19:19:26.771627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:09.739 [2024-11-26 19:19:26.771818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.739 [2024-11-26 19:19:26.771834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:09.739 [2024-11-26 19:19:26.780414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:09.739 [2024-11-26 19:19:26.780621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.739 [2024-11-26 19:19:26.780637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:09.739 [2024-11-26 19:19:26.788389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:09.739 [2024-11-26 19:19:26.788714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.740 [2024-11-26 19:19:26.788731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:09.740 [2024-11-26 19:19:26.796464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:09.740 [2024-11-26 19:19:26.796654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.740 [2024-11-26 19:19:26.796670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:09.740 [2024-11-26 19:19:26.805899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:09.740 [2024-11-26 19:19:26.806211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.740 [2024-11-26 19:19:26.806228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:09.740 [2024-11-26 19:19:26.814279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:09.740 [2024-11-26 19:19:26.814596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.740 [2024-11-26 19:19:26.814613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:09.740 [2024-11-26 19:19:26.819321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:09.740 [2024-11-26 19:19:26.819634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.740 [2024-11-26 19:19:26.819650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:09.740 [2024-11-26 19:19:26.824679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:09.740 [2024-11-26 19:19:26.824869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.740 [2024-11-26 19:19:26.824886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:09.740 [2024-11-26 19:19:26.833643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:09.740 [2024-11-26 19:19:26.833927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.740 [2024-11-26 19:19:26.833947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:09.740 [2024-11-26 19:19:26.839907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:09.740 [2024-11-26 19:19:26.840210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.740 [2024-11-26 19:19:26.840225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:09.740 [2024-11-26 19:19:26.849534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:09.740 [2024-11-26 19:19:26.849856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.740 [2024-11-26 19:19:26.849873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:09.740 [2024-11-26 19:19:26.854396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:09.740 [2024-11-26 19:19:26.854587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.740 [2024-11-26 19:19:26.854603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:09.740 [2024-11-26 19:19:26.862334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:09.740 [2024-11-26 19:19:26.862652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.740 [2024-11-26 19:19:26.862668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:09.740 [2024-11-26 19:19:26.868860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:09.740 [2024-11-26 19:19:26.869049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.740 [2024-11-26 19:19:26.869065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:09.740 [2024-11-26 19:19:26.875565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:09.740 [2024-11-26 19:19:26.875756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.740 [2024-11-26 19:19:26.875771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:09.740 [2024-11-26 19:19:26.880036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:09.740 [2024-11-26 19:19:26.880359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.740 [2024-11-26 19:19:26.880376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:09.740 [2024-11-26 19:19:26.884516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:09.740 [2024-11-26 19:19:26.884714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.740 [2024-11-26 19:19:26.884730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:09.740 [2024-11-26 19:19:26.891054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:09.740 [2024-11-26 19:19:26.891311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.740 [2024-11-26 19:19:26.891327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:09.740 [2024-11-26 19:19:26.897206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:09.740 [2024-11-26 19:19:26.897425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.740 [2024-11-26 19:19:26.897441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:09.740 [2024-11-26 19:19:26.908520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:09.740 [2024-11-26 19:19:26.908752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.740 [2024-11-26 19:19:26.908767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:09.740 [2024-11-26 19:19:26.919472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:09.740 [2024-11-26 19:19:26.919673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.740 [2024-11-26 19:19:26.919689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:09.740 [2024-11-26 19:19:26.931075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:09.740 [2024-11-26 19:19:26.931443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.740 [2024-11-26 19:19:26.931460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:09.740 [2024-11-26 19:19:26.939709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:09.740 [2024-11-26 19:19:26.939897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.740 [2024-11-26 19:19:26.939912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.002 [2024-11-26 19:19:26.949679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.002 [2024-11-26 19:19:26.949998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.002 [2024-11-26 19:19:26.950014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.002 [2024-11-26 19:19:26.956721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.002 [2024-11-26 19:19:26.956911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.003 [2024-11-26 19:19:26.956926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.003 [2024-11-26 19:19:26.961527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.003 [2024-11-26 19:19:26.961717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.003 [2024-11-26 19:19:26.961732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.003 [2024-11-26 19:19:26.966133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.003 [2024-11-26 19:19:26.966295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.003 [2024-11-26 19:19:26.966309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.003 [2024-11-26 19:19:26.974681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.003 [2024-11-26 19:19:26.974989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.003 [2024-11-26 19:19:26.975005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.003 [2024-11-26 19:19:26.981074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.003 [2024-11-26 19:19:26.981267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.003 [2024-11-26 19:19:26.981283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.003 [2024-11-26 19:19:26.989650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.003 [2024-11-26 19:19:26.989897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.003 [2024-11-26 19:19:26.989913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.003 [2024-11-26 19:19:26.997098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.003 [2024-11-26 19:19:26.997376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.003 [2024-11-26 19:19:26.997391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.003 [2024-11-26 19:19:27.006184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.003 [2024-11-26 19:19:27.006385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.003 [2024-11-26 19:19:27.006401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.003 [2024-11-26 19:19:27.011467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.003 [2024-11-26 19:19:27.011666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.003 [2024-11-26 19:19:27.011682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.003 [2024-11-26 19:19:27.015985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.003 [2024-11-26 19:19:27.016190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.003 [2024-11-26 19:19:27.016206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.003 [2024-11-26 19:19:27.020068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.003 [2024-11-26 19:19:27.020271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.003 [2024-11-26 19:19:27.020290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.003 [2024-11-26 19:19:27.024217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.003 [2024-11-26 19:19:27.024407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.003 [2024-11-26 19:19:27.024422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.003 [2024-11-26 19:19:27.032430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.003 [2024-11-26 19:19:27.032630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.003 [2024-11-26 19:19:27.032646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.003 [2024-11-26 19:19:27.041210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.003 [2024-11-26 19:19:27.041537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.003 [2024-11-26 19:19:27.041554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.003 [2024-11-26 19:19:27.051202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.003 [2024-11-26 19:19:27.051599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.003 [2024-11-26 19:19:27.051615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.003 [2024-11-26 19:19:27.058840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.003 [2024-11-26 19:19:27.059029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.003 [2024-11-26 19:19:27.059045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.003 [2024-11-26 19:19:27.066083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.003 [2024-11-26 19:19:27.066126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.003 [2024-11-26 19:19:27.066141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.003 [2024-11-26 19:19:27.071167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.003 [2024-11-26 19:19:27.071358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.003 [2024-11-26 19:19:27.071373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.003 [2024-11-26 19:19:27.075798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.003 [2024-11-26 19:19:27.075986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.003 [2024-11-26 19:19:27.076002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.003 [2024-11-26 19:19:27.080641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.003 [2024-11-26 19:19:27.080843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.003 [2024-11-26 19:19:27.080859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.003 [2024-11-26 19:19:27.088377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.003 [2024-11-26 19:19:27.088674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.003 [2024-11-26 19:19:27.088690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.003 [2024-11-26 19:19:27.092534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.003 [2024-11-26 19:19:27.092722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.003 [2024-11-26 19:19:27.092737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.003 [2024-11-26 19:19:27.096425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.003 [2024-11-26 19:19:27.096614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.003 [2024-11-26 19:19:27.096630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.003 [2024-11-26 19:19:27.100186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.003 [2024-11-26 19:19:27.100373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.003 [2024-11-26 19:19:27.100388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.003 [2024-11-26 19:19:27.105181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.003 [2024-11-26 19:19:27.105373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.003 [2024-11-26 19:19:27.105389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.003 [2024-11-26 19:19:27.108909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.003 [2024-11-26 19:19:27.109096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.003 [2024-11-26 19:19:27.109112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.003 [2024-11-26 19:19:27.112850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.003 [2024-11-26 19:19:27.113039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.003 [2024-11-26 19:19:27.113054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.003 [2024-11-26 19:19:27.116966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.004 [2024-11-26 19:19:27.117155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.004 [2024-11-26 19:19:27.117175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.004 [2024-11-26 19:19:27.121337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.004 [2024-11-26 19:19:27.121528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.004 [2024-11-26 19:19:27.121544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.004 [2024-11-26 19:19:27.126534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.004 [2024-11-26 19:19:27.126724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.004 [2024-11-26 19:19:27.126740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.004 [2024-11-26 19:19:27.130199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.004 [2024-11-26 19:19:27.130387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.004 [2024-11-26 19:19:27.130403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.004 [2024-11-26 19:19:27.133998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.004 [2024-11-26 19:19:27.134192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.004 [2024-11-26 19:19:27.134208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.004 [2024-11-26 19:19:27.137893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.004 [2024-11-26 19:19:27.138081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.004 [2024-11-26 19:19:27.138097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.004 [2024-11-26 19:19:27.141258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.004 [2024-11-26 19:19:27.141447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.004 [2024-11-26 19:19:27.141463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.004 [2024-11-26 19:19:27.144856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.004 [2024-11-26 19:19:27.145046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.004 [2024-11-26 19:19:27.145061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.004 [2024-11-26 19:19:27.148929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.004 [2024-11-26 19:19:27.149116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.004 [2024-11-26 19:19:27.149132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.004 [2024-11-26 19:19:27.153094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.004 [2024-11-26 19:19:27.153288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.004 [2024-11-26 19:19:27.153308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.004 [2024-11-26 19:19:27.156939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.004 [2024-11-26 19:19:27.157129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.004 [2024-11-26 19:19:27.157145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.004 [2024-11-26 19:19:27.161141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.004 [2024-11-26 19:19:27.161334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.004 [2024-11-26 19:19:27.161350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.004 [2024-11-26 19:19:27.165047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.004 [2024-11-26 19:19:27.165349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.004 [2024-11-26 19:19:27.165366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.004 [2024-11-26 19:19:27.170125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.004 [2024-11-26 19:19:27.170319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.004 [2024-11-26 19:19:27.170335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.004 [2024-11-26 19:19:27.173505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.004 [2024-11-26 19:19:27.173695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.004 [2024-11-26 19:19:27.173710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.004 [2024-11-26 19:19:27.176982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.004 [2024-11-26 19:19:27.177176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.004 [2024-11-26 19:19:27.177191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.004 [2024-11-26 19:19:27.180834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.004 [2024-11-26 19:19:27.181021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.004 [2024-11-26 19:19:27.181036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.004 [2024-11-26 19:19:27.187079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.004 [2024-11-26 19:19:27.187402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.004 [2024-11-26 19:19:27.187418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.004 [2024-11-26 19:19:27.191969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.004 [2024-11-26 19:19:27.192168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.004 [2024-11-26 19:19:27.192184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.004 [2024-11-26 19:19:27.195906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.004 [2024-11-26 19:19:27.196094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.004 [2024-11-26 19:19:27.196109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.004 [2024-11-26 19:19:27.199987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.004 [2024-11-26 19:19:27.200180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.004 [2024-11-26 19:19:27.200195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.004 [2024-11-26 19:19:27.207066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.004 [2024-11-26 19:19:27.207361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.004 [2024-11-26 19:19:27.207378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.264 [2024-11-26 19:19:27.212129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.264 [2024-11-26 19:19:27.212333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.264 [2024-11-26 19:19:27.212349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.264 [2024-11-26 19:19:27.215909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.264 [2024-11-26 19:19:27.216097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.264 [2024-11-26 19:19:27.216112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.264 [2024-11-26 19:19:27.219869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.264 [2024-11-26 19:19:27.220057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.264 [2024-11-26 19:19:27.220072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.264 [2024-11-26 19:19:27.223843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.264 [2024-11-26 19:19:27.224032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.264 [2024-11-26 19:19:27.224047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.264 [2024-11-26 19:19:27.228084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.264 [2024-11-26 19:19:27.228278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.264 [2024-11-26 19:19:27.228294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.264 [2024-11-26 19:19:27.232024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.264 [2024-11-26 19:19:27.232217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.264 [2024-11-26 19:19:27.232233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.264 [2024-11-26 19:19:27.235982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.264 [2024-11-26 19:19:27.236175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.264 [2024-11-26 19:19:27.236190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.264 [2024-11-26 19:19:27.240156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.264 [2024-11-26 19:19:27.240361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.264 [2024-11-26 19:19:27.240376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.264 [2024-11-26 19:19:27.244113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.264 [2024-11-26 19:19:27.244309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.264 [2024-11-26 19:19:27.244324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.265 [2024-11-26 19:19:27.248117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.265 [2024-11-26 19:19:27.248312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.265 [2024-11-26 19:19:27.248328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.265 [2024-11-26 19:19:27.252029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.265 [2024-11-26 19:19:27.252224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.265 [2024-11-26 19:19:27.252240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.265 [2024-11-26 19:19:27.256832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.265 [2024-11-26 19:19:27.257130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.265 [2024-11-26 19:19:27.257147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.265 [2024-11-26 19:19:27.261666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.265 [2024-11-26 19:19:27.261856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.265 [2024-11-26 19:19:27.261872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.265 [2024-11-26 19:19:27.265267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.265 [2024-11-26 19:19:27.265455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.265 [2024-11-26 19:19:27.265473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.265 [2024-11-26 19:19:27.272602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.265 [2024-11-26 19:19:27.272789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.265 [2024-11-26 19:19:27.272805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.265 [2024-11-26 19:19:27.282457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.265 [2024-11-26 19:19:27.282686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.265 [2024-11-26 19:19:27.282702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.265 [2024-11-26 19:19:27.293427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.265 [2024-11-26 19:19:27.293726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.265 [2024-11-26 19:19:27.293743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.265 [2024-11-26 19:19:27.304869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.265 [2024-11-26 19:19:27.305129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.265 [2024-11-26 19:19:27.305146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.265 [2024-11-26 19:19:27.316046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.265 [2024-11-26 19:19:27.316257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.265 [2024-11-26 19:19:27.316273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.265 [2024-11-26 19:19:27.327044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.265 [2024-11-26 19:19:27.327255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.265 [2024-11-26 19:19:27.327271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.265 4298.00 IOPS, 537.25 MiB/s [2024-11-26T18:19:27.478Z] [2024-11-26 19:19:27.339120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.265 [2024-11-26 19:19:27.339330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.265 [2024-11-26 19:19:27.339346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.265 [2024-11-26 19:19:27.350107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.265 [2024-11-26 19:19:27.350338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.265 [2024-11-26 19:19:27.350354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.265 [2024-11-26 19:19:27.360922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.265 [2024-11-26 19:19:27.361211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.265 [2024-11-26 19:19:27.361225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.265 [2024-11-26 19:19:27.372587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.265 [2024-11-26 19:19:27.372907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.265 [2024-11-26 19:19:27.372923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.265 [2024-11-26 19:19:27.383329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.265 [2024-11-26 19:19:27.383596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.265 [2024-11-26 19:19:27.383619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.265 [2024-11-26 19:19:27.393324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.265 [2024-11-26 19:19:27.393604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.265 [2024-11-26 19:19:27.393626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.265 [2024-11-26 19:19:27.402996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.265 [2024-11-26 19:19:27.403244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.265 [2024-11-26 19:19:27.403259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.265 [2024-11-26 19:19:27.412953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.265 [2024-11-26 19:19:27.413197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.265 [2024-11-26 19:19:27.413213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.265 [2024-11-26 19:19:27.423403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.265 [2024-11-26 19:19:27.423725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.265 [2024-11-26 19:19:27.423741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.265 [2024-11-26 19:19:27.432907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.265 [2024-11-26 19:19:27.433075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.265 [2024-11-26 19:19:27.433091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.265 [2024-11-26 19:19:27.437113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.265 [2024-11-26 19:19:27.437289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.265 [2024-11-26 19:19:27.437305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.265 [2024-11-26 19:19:27.441594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.265 [2024-11-26 19:19:27.441764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.265 [2024-11-26 19:19:27.441780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.265 [2024-11-26 19:19:27.449622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.265 [2024-11-26 19:19:27.449950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.265 [2024-11-26 19:19:27.449967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.265 [2024-11-26 19:19:27.453847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.265 [2024-11-26 19:19:27.454015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.265 [2024-11-26 19:19:27.454030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.265 [2024-11-26 19:19:27.457953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.265 [2024-11-26 19:19:27.458123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.265 [2024-11-26 19:19:27.458139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.265 [2024-11-26 19:19:27.461827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.265 [2024-11-26 19:19:27.461993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.265 [2024-11-26 19:19:27.462008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.265 [2024-11-26 19:19:27.465979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.265 [2024-11-26 19:19:27.466138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.265 [2024-11-26 19:19:27.466154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.265 [2024-11-26 19:19:27.472850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.265 [2024-11-26 19:19:27.473015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.265 [2024-11-26 19:19:27.473030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.527 [2024-11-26 19:19:27.477852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.527 [2024-11-26 19:19:27.478013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.527 [2024-11-26 19:19:27.478029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.527 [2024-11-26 19:19:27.484642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.527 [2024-11-26 19:19:27.484842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.527 [2024-11-26 19:19:27.484861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.527 [2024-11-26 19:19:27.491658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.527 [2024-11-26 19:19:27.491821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.527 [2024-11-26 19:19:27.491837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.527 [2024-11-26 19:19:27.495624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.527 [2024-11-26 19:19:27.495789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.527 [2024-11-26 19:19:27.495804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.527 [2024-11-26 19:19:27.500579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.527 [2024-11-26 19:19:27.500746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.527 [2024-11-26 19:19:27.500761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.527 [2024-11-26 19:19:27.503870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.527 [2024-11-26 19:19:27.504031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.527 [2024-11-26 19:19:27.504046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.527 [2024-11-26 19:19:27.510455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.527 [2024-11-26 19:19:27.510620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.527 [2024-11-26 19:19:27.510636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.527 [2024-11-26 19:19:27.515211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.527 [2024-11-26 19:19:27.515368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.527 [2024-11-26 19:19:27.515384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.527 [2024-11-26 19:19:27.518862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.527 [2024-11-26 19:19:27.519018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.527 [2024-11-26 19:19:27.519034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.527 [2024-11-26 19:19:27.522640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.527 [2024-11-26 19:19:27.522796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.527 [2024-11-26 19:19:27.522812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.527 [2024-11-26 19:19:27.527372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.527 [2024-11-26 19:19:27.527534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.527 [2024-11-26 19:19:27.527549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.527 [2024-11-26 19:19:27.531656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.527 [2024-11-26 19:19:27.531820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.527 [2024-11-26 19:19:27.531835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.527 [2024-11-26 19:19:27.537701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.527 [2024-11-26 19:19:27.537861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.527 [2024-11-26 19:19:27.537877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.527 [2024-11-26 19:19:27.541442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.527 [2024-11-26 19:19:27.541605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.527 [2024-11-26 19:19:27.541621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.527 [2024-11-26 19:19:27.544894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.527 [2024-11-26 19:19:27.545055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.527 [2024-11-26 19:19:27.545070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.527 [2024-11-26 19:19:27.547954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.527 [2024-11-26 19:19:27.548115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.527 [2024-11-26 19:19:27.548130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.527 [2024-11-26 19:19:27.553628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.527 [2024-11-26 19:19:27.553790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.527 [2024-11-26 19:19:27.553806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.527 [2024-11-26 19:19:27.556691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.527 [2024-11-26 19:19:27.556853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.527 [2024-11-26 19:19:27.556869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:10.527 [2024-11-26 19:19:27.559709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.527 [2024-11-26 19:19:27.559869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.527 [2024-11-26 19:19:27.559884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:10.528 [2024-11-26 19:19:27.564495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.528 [2024-11-26 19:19:27.564808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.528 [2024-11-26 19:19:27.564825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:10.528 [2024-11-26 19:19:27.571221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.528 [2024-11-26 19:19:27.571523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.528 [2024-11-26 19:19:27.571540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:10.528 [2024-11-26 19:19:27.577252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:10.528 [2024-11-26 19:19:27.577588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.528 [2024-11-26
19:19:27.577604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.528 [2024-11-26 19:19:27.583637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.528 [2024-11-26 19:19:27.583985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.528 [2024-11-26 19:19:27.584002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.528 [2024-11-26 19:19:27.589884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.528 [2024-11-26 19:19:27.590206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.528 [2024-11-26 19:19:27.590223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.528 [2024-11-26 19:19:27.594165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.528 [2024-11-26 19:19:27.594325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.528 [2024-11-26 19:19:27.594341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.528 [2024-11-26 19:19:27.597188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.528 [2024-11-26 19:19:27.597350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.528 [2024-11-26 19:19:27.597365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.528 [2024-11-26 19:19:27.600250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.528 [2024-11-26 19:19:27.600408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.528 [2024-11-26 19:19:27.600423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.528 [2024-11-26 19:19:27.603465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.528 [2024-11-26 19:19:27.603629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.528 [2024-11-26 19:19:27.603648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.528 [2024-11-26 19:19:27.606501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.528 [2024-11-26 19:19:27.606664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:10.528 [2024-11-26 19:19:27.606680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.528 [2024-11-26 19:19:27.610143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.528 [2024-11-26 19:19:27.610310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.528 [2024-11-26 19:19:27.610326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.528 [2024-11-26 19:19:27.614490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.528 [2024-11-26 19:19:27.614651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.528 [2024-11-26 19:19:27.614667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.528 [2024-11-26 19:19:27.618888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.528 [2024-11-26 19:19:27.619051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.528 [2024-11-26 19:19:27.619066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.528 [2024-11-26 19:19:27.625420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.528 [2024-11-26 19:19:27.625763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.528 [2024-11-26 19:19:27.625780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.528 [2024-11-26 19:19:27.630962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.528 [2024-11-26 19:19:27.631120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.528 [2024-11-26 19:19:27.631135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.528 [2024-11-26 19:19:27.635460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.528 [2024-11-26 19:19:27.635623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.528 [2024-11-26 19:19:27.635638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.528 [2024-11-26 19:19:27.642614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.528 [2024-11-26 19:19:27.642775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.528 [2024-11-26 19:19:27.642791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.528 [2024-11-26 19:19:27.647057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.528 [2024-11-26 19:19:27.647226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.528 [2024-11-26 19:19:27.647242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.528 [2024-11-26 19:19:27.651579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.528 [2024-11-26 19:19:27.651741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.528 [2024-11-26 19:19:27.651757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.528 [2024-11-26 19:19:27.655844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.528 [2024-11-26 19:19:27.656001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.528 [2024-11-26 19:19:27.656017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.528 [2024-11-26 19:19:27.659829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.528 [2024-11-26 19:19:27.659985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.528 [2024-11-26 19:19:27.660002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.528 [2024-11-26 19:19:27.666443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.528 [2024-11-26 19:19:27.666749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.528 [2024-11-26 19:19:27.666767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.528 [2024-11-26 19:19:27.673361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.528 [2024-11-26 19:19:27.673525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.528 [2024-11-26 19:19:27.673541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.528 [2024-11-26 19:19:27.677192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.528 [2024-11-26 19:19:27.677354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.528 [2024-11-26 19:19:27.677369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.528 [2024-11-26 19:19:27.680482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.528 [2024-11-26 19:19:27.680643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.528 [2024-11-26 19:19:27.680659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.528 [2024-11-26 19:19:27.683786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.528 [2024-11-26 19:19:27.683950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.528 [2024-11-26 19:19:27.683965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.528 [2024-11-26 19:19:27.686933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.528 [2024-11-26 19:19:27.687095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.528 [2024-11-26 19:19:27.687111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.528 [2024-11-26 19:19:27.691326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.529 [2024-11-26 19:19:27.691490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.529 [2024-11-26 19:19:27.691506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.529 [2024-11-26 19:19:27.694413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.529 [2024-11-26 19:19:27.694576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.529 [2024-11-26 19:19:27.694591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.529 [2024-11-26 19:19:27.697498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.529 [2024-11-26 19:19:27.697661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.529 [2024-11-26 19:19:27.697676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.529 [2024-11-26 19:19:27.701773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.529 [2024-11-26 19:19:27.701931] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.529 [2024-11-26 19:19:27.701947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.529 [2024-11-26 19:19:27.706435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.529 [2024-11-26 19:19:27.706594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.529 [2024-11-26 19:19:27.706610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.529 [2024-11-26 19:19:27.709455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.529 [2024-11-26 19:19:27.709614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.529 [2024-11-26 19:19:27.709630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.529 [2024-11-26 19:19:27.712224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.529 [2024-11-26 19:19:27.712385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.529 [2024-11-26 19:19:27.712400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.529 [2024-11-26 19:19:27.715219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.529 [2024-11-26 19:19:27.715383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.529 [2024-11-26 19:19:27.715403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.529 [2024-11-26 19:19:27.719655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.529 [2024-11-26 19:19:27.719844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.529 [2024-11-26 19:19:27.719860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.529 [2024-11-26 19:19:27.723496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.529 [2024-11-26 19:19:27.723658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.529 [2024-11-26 19:19:27.723673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.529 [2024-11-26 19:19:27.731300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.529 [2024-11-26 
19:19:27.731461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.529 [2024-11-26 19:19:27.731477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.529 [2024-11-26 19:19:27.734331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.529 [2024-11-26 19:19:27.734492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.529 [2024-11-26 19:19:27.734507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.791 [2024-11-26 19:19:27.737200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.791 [2024-11-26 19:19:27.737362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.791 [2024-11-26 19:19:27.737377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.791 [2024-11-26 19:19:27.740445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.791 [2024-11-26 19:19:27.740607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.791 [2024-11-26 19:19:27.740623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.791 [2024-11-26 19:19:27.744405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.791 [2024-11-26 19:19:27.744565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.791 [2024-11-26 19:19:27.744580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.791 [2024-11-26 19:19:27.751488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.791 [2024-11-26 19:19:27.751646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.791 [2024-11-26 19:19:27.751662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.791 [2024-11-26 19:19:27.754718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.791 [2024-11-26 19:19:27.754882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.791 [2024-11-26 19:19:27.754897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.791 [2024-11-26 19:19:27.760297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with 
pdu=0x200016eff3c8 00:29:10.791 [2024-11-26 19:19:27.760462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.791 [2024-11-26 19:19:27.760477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.791 [2024-11-26 19:19:27.763654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.791 [2024-11-26 19:19:27.763816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.791 [2024-11-26 19:19:27.763832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.791 [2024-11-26 19:19:27.767109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.791 [2024-11-26 19:19:27.767272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.791 [2024-11-26 19:19:27.767287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.791 [2024-11-26 19:19:27.770240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.791 [2024-11-26 19:19:27.770401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.791 [2024-11-26 19:19:27.770416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.791 [2024-11-26 19:19:27.773233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.791 [2024-11-26 19:19:27.773395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.791 [2024-11-26 19:19:27.773410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.791 [2024-11-26 19:19:27.776268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.791 [2024-11-26 19:19:27.776429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.791 [2024-11-26 19:19:27.776445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.791 [2024-11-26 19:19:27.779868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.791 [2024-11-26 19:19:27.780034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.791 [2024-11-26 19:19:27.780050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.791 [2024-11-26 19:19:27.782981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.791 [2024-11-26 19:19:27.783143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.791 [2024-11-26 19:19:27.783164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.791 [2024-11-26 19:19:27.786284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.791 [2024-11-26 19:19:27.786446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.791 [2024-11-26 19:19:27.786462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.791 [2024-11-26 19:19:27.793993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.791 [2024-11-26 19:19:27.794157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.791 [2024-11-26 19:19:27.794178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.791 [2024-11-26 19:19:27.796791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.791 [2024-11-26 19:19:27.796951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.791 [2024-11-26 19:19:27.796967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.791 [2024-11-26 19:19:27.800085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.791 [2024-11-26 19:19:27.800249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.791 [2024-11-26 19:19:27.800265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.791 [2024-11-26 19:19:27.803811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.791 [2024-11-26 19:19:27.803968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.791 [2024-11-26 19:19:27.803983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.791 [2024-11-26 19:19:27.807793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.791 [2024-11-26 19:19:27.807958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.791 [2024-11-26 19:19:27.807974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.791 [2024-11-26 19:19:27.811048] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.791 [2024-11-26 19:19:27.811214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.791 [2024-11-26 19:19:27.811230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.792 [2024-11-26 19:19:27.814377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.792 [2024-11-26 19:19:27.814542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.792 [2024-11-26 19:19:27.814558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.792 [2024-11-26 19:19:27.817281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.792 [2024-11-26 19:19:27.817441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.792 [2024-11-26 19:19:27.817462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.792 [2024-11-26 19:19:27.820559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.792 [2024-11-26 19:19:27.820716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.792 [2024-11-26 19:19:27.820731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.792 [2024-11-26 19:19:27.823882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.792 [2024-11-26 19:19:27.824048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.792 [2024-11-26 19:19:27.824063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.792 [2024-11-26 19:19:27.827620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.792 [2024-11-26 19:19:27.827797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.792 [2024-11-26 19:19:27.827812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.792 [2024-11-26 19:19:27.835854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.792 [2024-11-26 19:19:27.836130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.792 [2024-11-26 19:19:27.836145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.792 [2024-11-26 19:19:27.845753] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.792 [2024-11-26 19:19:27.846039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.792 [2024-11-26 19:19:27.846056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.792 [2024-11-26 19:19:27.856866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.792 [2024-11-26 19:19:27.857187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.792 [2024-11-26 19:19:27.857203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.792 [2024-11-26 19:19:27.867770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.792 [2024-11-26 19:19:27.868055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.792 [2024-11-26 19:19:27.868071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.792 [2024-11-26 19:19:27.875724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.792 [2024-11-26 19:19:27.875886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.792 [2024-11-26 19:19:27.875901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.792 [2024-11-26 19:19:27.885625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.792 [2024-11-26 19:19:27.885903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.792 [2024-11-26 19:19:27.885920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.792 [2024-11-26 19:19:27.896456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.792 [2024-11-26 19:19:27.896767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.792 [2024-11-26 19:19:27.896783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.792 [2024-11-26 19:19:27.906425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.792 [2024-11-26 19:19:27.906594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.792 [2024-11-26 19:19:27.906610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.792 
[2024-11-26 19:19:27.916838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.792 [2024-11-26 19:19:27.917047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.792 [2024-11-26 19:19:27.917062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.792 [2024-11-26 19:19:27.927135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.792 [2024-11-26 19:19:27.927436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.792 [2024-11-26 19:19:27.927452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.792 [2024-11-26 19:19:27.937668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.792 [2024-11-26 19:19:27.938003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.792 [2024-11-26 19:19:27.938019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.792 [2024-11-26 19:19:27.948270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.792 [2024-11-26 19:19:27.948477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.792 [2024-11-26 19:19:27.948492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.792 [2024-11-26 19:19:27.957428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.792 [2024-11-26 19:19:27.957677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.792 [2024-11-26 19:19:27.957693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.792 [2024-11-26 19:19:27.961575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.792 [2024-11-26 19:19:27.961782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.792 [2024-11-26 19:19:27.961797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.792 [2024-11-26 19:19:27.965286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.792 [2024-11-26 19:19:27.965443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.792 [2024-11-26 19:19:27.965458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:29:10.792 [2024-11-26 19:19:27.969030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.792 [2024-11-26 19:19:27.969190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.792 [2024-11-26 19:19:27.969206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.792 [2024-11-26 19:19:27.972190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.792 [2024-11-26 19:19:27.972345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.792 [2024-11-26 19:19:27.972360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.792 [2024-11-26 19:19:27.975388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.792 [2024-11-26 19:19:27.975541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.792 [2024-11-26 19:19:27.975557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.792 [2024-11-26 19:19:27.978207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.792 [2024-11-26 19:19:27.978357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.792 [2024-11-26 19:19:27.978372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.792 [2024-11-26 19:19:27.980832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.792 [2024-11-26 19:19:27.980985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.792 [2024-11-26 19:19:27.981000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.792 [2024-11-26 19:19:27.983676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.792 [2024-11-26 19:19:27.983827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.792 [2024-11-26 19:19:27.983842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.792 [2024-11-26 19:19:27.986453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.792 [2024-11-26 19:19:27.986605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.792 [2024-11-26 19:19:27.986620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.792 [2024-11-26 19:19:27.989249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.793 [2024-11-26 19:19:27.989398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.793 [2024-11-26 19:19:27.989417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.793 [2024-11-26 19:19:27.991864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.793 [2024-11-26 19:19:27.992010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.793 [2024-11-26 19:19:27.992025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.793 [2024-11-26 19:19:27.995208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:10.793 [2024-11-26 19:19:27.995389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.793 [2024-11-26 19:19:27.995404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.054 [2024-11-26 19:19:28.003092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.054 [2024-11-26 19:19:28.003371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.054 [2024-11-26 19:19:28.003388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.054 [2024-11-26 19:19:28.013126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.054 [2024-11-26 19:19:28.013338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.054 [2024-11-26 19:19:28.013354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.054 [2024-11-26 19:19:28.023940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.054 [2024-11-26 19:19:28.024221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.054 [2024-11-26 19:19:28.024236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.054 [2024-11-26 19:19:28.034106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.054 [2024-11-26 19:19:28.034342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.054 [2024-11-26 19:19:28.034358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.054 [2024-11-26 19:19:28.044114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.054 [2024-11-26 19:19:28.044300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.054 [2024-11-26 19:19:28.044316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.054 [2024-11-26 19:19:28.054728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.054 [2024-11-26 19:19:28.054987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.054 [2024-11-26 19:19:28.055002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.054 [2024-11-26 19:19:28.065329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.054 [2024-11-26 19:19:28.065561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.054 [2024-11-26 19:19:28.065576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.054 [2024-11-26 19:19:28.076283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.054 [2024-11-26 19:19:28.076540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.054 [2024-11-26 19:19:28.076556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.054 [2024-11-26 19:19:28.087189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.054 [2024-11-26 19:19:28.087407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.054 [2024-11-26 19:19:28.087422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.054 [2024-11-26 19:19:28.096928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.054 [2024-11-26 19:19:28.097175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.054 [2024-11-26 19:19:28.097190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.054 [2024-11-26 19:19:28.107125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.054 [2024-11-26 19:19:28.107375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.054 [2024-11-26 19:19:28.107390] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.054 [2024-11-26 19:19:28.118112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.054 [2024-11-26 19:19:28.118386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.054 [2024-11-26 19:19:28.118401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.054 [2024-11-26 19:19:28.128756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.054 [2024-11-26 19:19:28.129084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.054 [2024-11-26 19:19:28.129102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.054 [2024-11-26 19:19:28.137526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.054 [2024-11-26 19:19:28.137800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.054 [2024-11-26 19:19:28.137817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.054 [2024-11-26 19:19:28.141599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.054 [2024-11-26 19:19:28.141740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.054 [2024-11-26 19:19:28.141755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.054 [2024-11-26 19:19:28.144331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.054 [2024-11-26 19:19:28.144475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.054 [2024-11-26 19:19:28.144490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.054 [2024-11-26 19:19:28.147314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.054 [2024-11-26 19:19:28.147455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.054 [2024-11-26 19:19:28.147470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.054 [2024-11-26 19:19:28.150368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.054 [2024-11-26 19:19:28.150511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.054 [2024-11-26 
19:19:28.150527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.054 [2024-11-26 19:19:28.153205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.054 [2024-11-26 19:19:28.153348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.054 [2024-11-26 19:19:28.153363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.054 [2024-11-26 19:19:28.155910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.054 [2024-11-26 19:19:28.156054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.054 [2024-11-26 19:19:28.156069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.054 [2024-11-26 19:19:28.158485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.054 [2024-11-26 19:19:28.158628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.054 [2024-11-26 19:19:28.158643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.054 [2024-11-26 19:19:28.161016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.054 [2024-11-26 19:19:28.161165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.054 [2024-11-26 19:19:28.161180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.054 [2024-11-26 19:19:28.163681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.054 [2024-11-26 19:19:28.163821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.054 [2024-11-26 19:19:28.163836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.054 [2024-11-26 19:19:28.166548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.054 [2024-11-26 19:19:28.166689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.054 [2024-11-26 19:19:28.166707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.054 [2024-11-26 19:19:28.169237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.054 [2024-11-26 19:19:28.169377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
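
The run above drives WRITE commands whose NVMe/TCP data digest (CRC32C) check fails, so each one completes with TRANSIENT TRANSPORT ERROR (00/22); the harness later tallies those completions from the bdev's error counters. A minimal bash sketch of that query, with the rpc.py path, bperf socket, bdev name, and jq filter taken verbatim from the trace further below:

# Read the transient-transport-error counter for the bperf-attached bdev.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
errs=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
  jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( errs > 0 )) && echo "nvme0n1 recorded $errs transient transport errors"
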
00:29:11.054 [2024-11-26 19:19:28.169392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.054 [2024-11-26 19:19:28.171769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.054 [2024-11-26 19:19:28.171912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.054 [2024-11-26 19:19:28.171927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.054 [2024-11-26 19:19:28.174277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.055 [2024-11-26 19:19:28.174424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.055 [2024-11-26 19:19:28.174439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.055 [2024-11-26 19:19:28.176876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.055 [2024-11-26 19:19:28.177071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.055 [2024-11-26 19:19:28.177086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.055 [2024-11-26 19:19:28.180261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.055 [2024-11-26 19:19:28.180438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.055 [2024-11-26 19:19:28.180453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.055 [2024-11-26 19:19:28.182851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.055 [2024-11-26 19:19:28.182995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.055 [2024-11-26 19:19:28.183010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.055 [2024-11-26 19:19:28.185345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.055 [2024-11-26 19:19:28.185508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.055 [2024-11-26 19:19:28.185524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.055 [2024-11-26 19:19:28.188692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.055 [2024-11-26 19:19:28.188879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7456 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.055 [2024-11-26 19:19:28.188895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.055 [2024-11-26 19:19:28.191299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.055 [2024-11-26 19:19:28.191446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.055 [2024-11-26 19:19:28.191461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.055 [2024-11-26 19:19:28.193817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.055 [2024-11-26 19:19:28.193960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.055 [2024-11-26 19:19:28.193976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.055 [2024-11-26 19:19:28.196430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.055 [2024-11-26 19:19:28.196574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.055 [2024-11-26 19:19:28.196589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.055 [2024-11-26 19:19:28.199048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.055 [2024-11-26 19:19:28.199196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.055 [2024-11-26 19:19:28.199211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.055 [2024-11-26 19:19:28.201530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.055 [2024-11-26 19:19:28.201672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.055 [2024-11-26 19:19:28.201687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.055 [2024-11-26 19:19:28.203982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.055 [2024-11-26 19:19:28.204125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.055 [2024-11-26 19:19:28.204140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.055 [2024-11-26 19:19:28.207063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.055 [2024-11-26 19:19:28.207254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.055 [2024-11-26 19:19:28.207270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.055 [2024-11-26 19:19:28.214549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.055 [2024-11-26 19:19:28.214838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.055 [2024-11-26 19:19:28.214855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.055 [2024-11-26 19:19:28.225298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.055 [2024-11-26 19:19:28.225505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.055 [2024-11-26 19:19:28.225520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.055 [2024-11-26 19:19:28.236094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.055 [2024-11-26 19:19:28.236325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.055 [2024-11-26 19:19:28.236341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.055 [2024-11-26 19:19:28.246334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.055 [2024-11-26 19:19:28.246642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.055 [2024-11-26 19:19:28.246659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.055 [2024-11-26 19:19:28.257079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.055 [2024-11-26 19:19:28.257296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.055 [2024-11-26 19:19:28.257311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.315 [2024-11-26 19:19:28.267863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.315 [2024-11-26 19:19:28.268041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.315 [2024-11-26 19:19:28.268056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.315 [2024-11-26 19:19:28.278069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.315 [2024-11-26 19:19:28.278424] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.315 [2024-11-26 19:19:28.278440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.316 [2024-11-26 19:19:28.288881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.316 [2024-11-26 19:19:28.289178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.316 [2024-11-26 19:19:28.289194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.316 [2024-11-26 19:19:28.299764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.316 [2024-11-26 19:19:28.300012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.316 [2024-11-26 19:19:28.300028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.316 [2024-11-26 19:19:28.310629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.316 [2024-11-26 19:19:28.310908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.316 [2024-11-26 19:19:28.310925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.316 [2024-11-26 19:19:28.319426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.316 [2024-11-26 19:19:28.319692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.316 [2024-11-26 19:19:28.319711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.316 [2024-11-26 19:19:28.327528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.316 [2024-11-26 19:19:28.327668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.316 [2024-11-26 19:19:28.327683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.316 [2024-11-26 19:19:28.330442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.316 [2024-11-26 19:19:28.330583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.316 [2024-11-26 19:19:28.330598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.316 [2024-11-26 19:19:28.333186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8 00:29:11.316 [2024-11-26 19:19:28.333331] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.316 [2024-11-26 19:19:28.333346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:11.316 [2024-11-26 19:19:28.336179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:11.316 [2024-11-26 19:19:28.336319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.316 [2024-11-26 19:19:28.336334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:11.316 4935.00 IOPS, 616.88 MiB/s [2024-11-26T18:19:28.529Z] [2024-11-26 19:19:28.342699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x229a710) with pdu=0x200016eff3c8
00:29:11.316 [2024-11-26 19:19:28.342880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.316 [2024-11-26 19:19:28.342895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:11.316
00:29:11.316 Latency(us)
00:29:11.316 [2024-11-26T18:19:28.529Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:11.316 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:29:11.316 nvme0n1 : 2.01 4929.47 616.18 0.00 0.00 3239.21 1174.19 12342.61
00:29:11.316 [2024-11-26T18:19:28.529Z] ===================================================================================================================
00:29:11.316 [2024-11-26T18:19:28.529Z] Total : 4929.47 616.18 0.00 0.00 3239.21 1174.19 12342.61
00:29:11.316 {
00:29:11.316 "results": [
00:29:11.316 {
00:29:11.316 "job": "nvme0n1",
00:29:11.316 "core_mask": "0x2",
00:29:11.316 "workload": "randwrite",
00:29:11.316 "status": "finished",
00:29:11.316 "queue_depth": 16,
00:29:11.316 "io_size": 131072,
00:29:11.316 "runtime": 2.006301,
00:29:11.316 "iops": 4929.469705692217,
00:29:11.316 "mibps": 616.1837132115271,
00:29:11.316 "io_failed": 0,
00:29:11.316 "io_timeout": 0,
00:29:11.316 "avg_latency_us": 3239.2057054263564,
00:29:11.316 "min_latency_us": 1174.1866666666667,
00:29:11.316 "max_latency_us": 12342.613333333333
00:29:11.316 }
00:29:11.316 ],
00:29:11.316 "core_count": 1
00:29:11.316 }
00:29:11.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:11.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:11.316 | .driver_specific
00:29:11.316 | .nvme_error
00:29:11.316 | .status_code
00:29:11.316 | .command_transient_transport_error'
00:29:11.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:11.316 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:11.576 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 320 > 0 ))
00:29:11.576 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- #
killprocess 3120038 00:29:11.576 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3120038 ']' 00:29:11.576 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3120038 00:29:11.576 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:11.576 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:11.576 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3120038 00:29:11.576 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:11.576 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:11.576 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3120038' 00:29:11.576 killing process with pid 3120038 00:29:11.576 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3120038 00:29:11.576 Received shutdown signal, test time was about 2.000000 seconds 00:29:11.576 00:29:11.576 Latency(us) 00:29:11.576 [2024-11-26T18:19:28.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:11.576 [2024-11-26T18:19:28.789Z] =================================================================================================================== 00:29:11.576 [2024-11-26T18:19:28.789Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:11.576 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3120038 00:29:11.576 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3117561 00:29:11.576 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3117561 ']' 00:29:11.576 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3117561 00:29:11.576 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:11.576 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:11.576 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3117561 00:29:11.576 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:11.576 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:11.576 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3117561' 00:29:11.576 killing process with pid 3117561 00:29:11.576 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3117561 00:29:11.576 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3117561 00:29:11.836 00:29:11.836 real 0m16.291s 00:29:11.836 user 0m32.377s 00:29:11.836 sys 0m3.518s 00:29:11.836 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:11.836 19:19:28 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:11.836 ************************************ 00:29:11.836 END TEST nvmf_digest_error 00:29:11.836 ************************************ 00:29:11.836 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:11.836 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:29:11.836 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:11.836 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:29:11.836 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:11.836 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:29:11.836 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:11.836 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:11.836 rmmod nvme_tcp 00:29:11.836 rmmod nvme_fabrics 00:29:11.836 rmmod nvme_keyring 00:29:11.836 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:11.836 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:29:11.836 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:29:11.836 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3117561 ']' 00:29:11.836 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3117561 00:29:11.836 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 3117561 ']' 00:29:11.836 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 3117561 00:29:11.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3117561) - No such process 00:29:11.836 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 3117561 is not found' 00:29:11.836 Process with pid 3117561 is not found 00:29:11.836 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:11.836 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:11.836 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:11.836 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:29:11.836 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:29:11.836 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:11.836 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:29:11.836 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:11.836 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:11.836 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.836 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:11.836 19:19:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:14.428 00:29:14.428 real 0m43.209s 00:29:14.428 user 1m7.673s 00:29:14.428 
sys 0m13.271s 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:14.428 ************************************ 00:29:14.428 END TEST nvmf_digest 00:29:14.428 ************************************ 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.428 ************************************ 00:29:14.428 START TEST nvmf_bdevperf 00:29:14.428 ************************************ 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:14.428 * Looking for test storage... 00:29:14.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:14.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.428 --rc genhtml_branch_coverage=1 00:29:14.428 --rc genhtml_function_coverage=1 00:29:14.428 --rc genhtml_legend=1 00:29:14.428 --rc geninfo_all_blocks=1 00:29:14.428 --rc geninfo_unexecuted_blocks=1 00:29:14.428 00:29:14.428 ' 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:14.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.428 --rc genhtml_branch_coverage=1 00:29:14.428 --rc genhtml_function_coverage=1 00:29:14.428 --rc genhtml_legend=1 00:29:14.428 --rc geninfo_all_blocks=1 00:29:14.428 --rc geninfo_unexecuted_blocks=1 00:29:14.428 00:29:14.428 ' 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:14.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.428 --rc genhtml_branch_coverage=1 00:29:14.428 --rc genhtml_function_coverage=1 00:29:14.428 --rc genhtml_legend=1 00:29:14.428 --rc geninfo_all_blocks=1 00:29:14.428 --rc geninfo_unexecuted_blocks=1 00:29:14.428 00:29:14.428 ' 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:14.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.428 --rc genhtml_branch_coverage=1 00:29:14.428 --rc genhtml_function_coverage=1 00:29:14.428 --rc genhtml_legend=1 00:29:14.428 --rc geninfo_all_blocks=1 00:29:14.428 --rc geninfo_unexecuted_blocks=1 00:29:14.428 00:29:14.428 ' 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:14.428 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:14.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:14.429 19:19:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:22.714 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:22.714 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
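
The discovery being traced here builds ID tables for the supported NICs (Intel e810/x722, Mellanox), then resolves each matching PCI function to its kernel net device through the same sysfs glob shown above. A condensed, standalone sketch of that resolution step; the PCI addresses are the two this run found, and the operstate read is an added illustration of the "up" check:

# Map PCI functions to their net devices via sysfs, as the
# pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) glob above does.
for pci in 0000:4b:00.0 0000:4b:00.1; do
  for path in "/sys/bus/pci/devices/$pci/net/"*; do
    [[ -e $path ]] || continue
    dev=${path##*/}
    state=$(cat "$path/operstate" 2>/dev/null)
    echo "Found net device under $pci: $dev ($state)"
  done
done
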
00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:22.714 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:22.714 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:22.715 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:22.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:22.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:29:22.715 00:29:22.715 --- 10.0.0.2 ping statistics --- 00:29:22.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:22.715 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:22.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:22.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:29:22.715 00:29:22.715 --- 10.0.0.1 ping statistics --- 00:29:22.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:22.715 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3124948 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3124948 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3124948 ']' 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:22.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:22.715 19:19:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:22.715 [2024-11-26 19:19:38.963804] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
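
To summarize the plumbing just traced: nvmf_tcp_init moves the target-side e810 port (cvl_0_0) into its own network namespace with 10.0.0.2, leaves the initiator port (cvl_0_1) in the root namespace with 10.0.0.1, opens TCP port 4420 in iptables, and proves reachability in both directions before nvmf_tgt is started inside the namespace. The same sequence, condensed from the traced commands:

# Split-namespace topology used by the phy tests (commands as traced above).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # root ns -> target side
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator side
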
00:29:22.715 [2024-11-26 19:19:38.963872] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:22.715 [2024-11-26 19:19:39.048080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:22.715 [2024-11-26 19:19:39.101807] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:22.715 [2024-11-26 19:19:39.101859] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:22.715 [2024-11-26 19:19:39.101867] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:22.715 [2024-11-26 19:19:39.101874] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:22.715 [2024-11-26 19:19:39.101881] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:22.715 [2024-11-26 19:19:39.103744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:22.715 [2024-11-26 19:19:39.103905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:22.715 [2024-11-26 19:19:39.103907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:22.715 19:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:22.715 19:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:29:22.715 19:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:22.715 19:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:22.715 19:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:22.715 19:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:22.715 19:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:22.715 19:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.715 19:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:22.715 [2024-11-26 19:19:39.844946] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:22.715 19:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.715 19:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:22.715 19:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.715 19:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:22.715 Malloc0 00:29:22.715 19:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.715 19:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:22.715 19:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.715 19:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:22.715 19:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:29:22.715 19:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:22.715 19:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.715 19:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:22.715 19:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.715 19:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:22.715 19:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.715 19:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:22.715 [2024-11-26 19:19:39.916377] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:22.715 19:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.977 19:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:22.977 19:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:22.977 19:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:22.977 19:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:22.977 19:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:22.977 19:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:22.977 { 00:29:22.977 "params": { 00:29:22.977 "name": "Nvme$subsystem", 00:29:22.977 "trtype": "$TEST_TRANSPORT", 00:29:22.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.977 "adrfam": "ipv4", 00:29:22.977 "trsvcid": "$NVMF_PORT", 00:29:22.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.977 "hdgst": ${hdgst:-false}, 00:29:22.977 "ddgst": ${ddgst:-false} 00:29:22.977 }, 00:29:22.977 "method": "bdev_nvme_attach_controller" 00:29:22.977 } 00:29:22.977 EOF 00:29:22.977 )") 00:29:22.977 19:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:22.977 19:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:29:22.977 19:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:22.977 19:19:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:22.977 "params": { 00:29:22.977 "name": "Nvme1", 00:29:22.977 "trtype": "tcp", 00:29:22.977 "traddr": "10.0.0.2", 00:29:22.977 "adrfam": "ipv4", 00:29:22.977 "trsvcid": "4420", 00:29:22.977 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:22.977 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:22.977 "hdgst": false, 00:29:22.977 "ddgst": false 00:29:22.977 }, 00:29:22.977 "method": "bdev_nvme_attach_controller" 00:29:22.977 }' 00:29:22.977 [2024-11-26 19:19:39.976614] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
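At this point the target side is fully configured: a TCP transport with -u 8192 (IO unit size), a 64 MiB / 512-byte-block Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 carrying that namespace, and a listener on 10.0.0.2:4420; the first bdevperf run (the 1-second verify pass) is starting below. The rpc_cmd calls above map onto SPDK's scripts/rpc.py; as a standalone sketch it would look roughly like this (the flag spellings simply mirror the rpc_cmd arguments in the trace and the default /var/tmp/spdk.sock socket is assumed, not verified against this SPDK revision):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # TCP transport, flags as issued above
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420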
00:29:22.977 [2024-11-26 19:19:39.976677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3125143 ] 00:29:22.977 [2024-11-26 19:19:40.071183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.977 [2024-11-26 19:19:40.123867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:23.238 Running I/O for 1 seconds... 00:29:24.624 8596.00 IOPS, 33.58 MiB/s 00:29:24.624 Latency(us) 00:29:24.624 [2024-11-26T18:19:41.837Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.624 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:24.624 Verification LBA range: start 0x0 length 0x4000 00:29:24.624 Nvme1n1 : 1.01 8680.59 33.91 0.00 0.00 14683.72 2676.05 14527.15 00:29:24.624 [2024-11-26T18:19:41.837Z] =================================================================================================================== 00:29:24.624 [2024-11-26T18:19:41.837Z] Total : 8680.59 33.91 0.00 0.00 14683.72 2676.05 14527.15 00:29:24.624 19:19:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3125475 00:29:24.624 19:19:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:24.624 19:19:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:24.624 19:19:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:24.624 19:19:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:24.624 19:19:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:24.624 19:19:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:24.624 19:19:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:24.624 { 00:29:24.624 "params": { 00:29:24.624 "name": "Nvme$subsystem", 00:29:24.624 "trtype": "$TEST_TRANSPORT", 00:29:24.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:24.624 "adrfam": "ipv4", 00:29:24.624 "trsvcid": "$NVMF_PORT", 00:29:24.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:24.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:24.624 "hdgst": ${hdgst:-false}, 00:29:24.624 "ddgst": ${ddgst:-false} 00:29:24.624 }, 00:29:24.624 "method": "bdev_nvme_attach_controller" 00:29:24.624 } 00:29:24.624 EOF 00:29:24.624 )") 00:29:24.624 19:19:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:24.624 19:19:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:29:24.624 19:19:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:24.624 19:19:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:24.624 "params": { 00:29:24.624 "name": "Nvme1", 00:29:24.624 "trtype": "tcp", 00:29:24.624 "traddr": "10.0.0.2", 00:29:24.624 "adrfam": "ipv4", 00:29:24.624 "trsvcid": "4420", 00:29:24.624 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:24.624 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:24.624 "hdgst": false, 00:29:24.624 "ddgst": false 00:29:24.624 }, 00:29:24.624 "method": "bdev_nvme_attach_controller" 00:29:24.624 }' 00:29:24.624 [2024-11-26 19:19:41.629601] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:29:24.624 [2024-11-26 19:19:41.629675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3125475 ] 00:29:24.624 [2024-11-26 19:19:41.722289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.624 [2024-11-26 19:19:41.773415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.885 Running I/O for 15 seconds... 00:29:27.212 9748.00 IOPS, 38.08 MiB/s [2024-11-26T18:19:44.689Z] 10570.50 IOPS, 41.29 MiB/s [2024-11-26T18:19:44.689Z] 19:19:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3124948 00:29:27.476 19:19:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:27.476 [2024-11-26 19:19:44.590044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:85248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.476 [2024-11-26 19:19:44.590085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.476 [2024-11-26 19:19:44.590105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:85256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.476 [2024-11-26 19:19:44.590115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.476 [2024-11-26 19:19:44.590126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:85264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.476 [2024-11-26 19:19:44.590135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.476 [2024-11-26 19:19:44.590145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:85272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.476 [2024-11-26 19:19:44.590155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.476 [2024-11-26 19:19:44.590170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:85280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.476 [2024-11-26 19:19:44.590178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.476 [2024-11-26 19:19:44.590188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:85288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.476 [2024-11-26 
19:19:44.590197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:27.476-00:29:27.481 [~120 further nvme_qpair.c print_command/print_completion NOTICE pairs elided, timestamps 19:19:44.590208-19:19:44.592374: every remaining queued command on qid:1 (WRITE lba 85296-85600, READ lba 84584-85232) completes as ABORTED - SQ DELETION (00/08)]
00:29:27.481 [2024-11-26 19:19:44.592382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.481 [2024-11-26 19:19:44.592390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174e170 is same with the state(6) to be set 00:29:27.481 [2024-11-26 19:19:44.592400] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:27.481 [2024-11-26 19:19:44.592406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:27.481 [2024-11-26 19:19:44.592413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85240 len:8 PRP1 0x0 PRP2 0x0 00:29:27.481 [2024-11-26 19:19:44.592421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.481 [2024-11-26 19:19:44.592501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.481 [2024-11-26 19:19:44.592513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.481 [2024-11-26 19:19:44.592522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.481 [2024-11-26 19:19:44.592529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.481 [2024-11-26 19:19:44.592537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.481 [2024-11-26 19:19:44.592545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.481 [2024-11-26 19:19:44.592553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.481 [2024-11-26 19:19:44.592560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.481 [2024-11-26 19:19:44.592570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:27.481 [2024-11-26 19:19:44.596103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.481 [2024-11-26 19:19:44.596125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:27.481 [2024-11-26 19:19:44.596890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.481 [2024-11-26 19:19:44.596909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:27.481 [2024-11-26 19:19:44.596917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:27.481 [2024-11-26 19:19:44.597139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:27.481 [2024-11-26 19:19:44.597369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.481 [2024-11-26 19:19:44.597378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] 
controller reinitialization failed 00:29:27.481 [2024-11-26 19:19:44.597387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.481 [2024-11-26 19:19:44.597397] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:27.481 [2024-11-26 19:19:44.610204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.481 [2024-11-26 19:19:44.610758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.481 [2024-11-26 19:19:44.610776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:27.481 [2024-11-26 19:19:44.610785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:27.481 [2024-11-26 19:19:44.611005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:27.481 [2024-11-26 19:19:44.611232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.481 [2024-11-26 19:19:44.611241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.481 [2024-11-26 19:19:44.611248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.481 [2024-11-26 19:19:44.611255] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:27.481 [2024-11-26 19:19:44.624066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:27.481 [2024-11-26 19:19:44.624613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.481 [2024-11-26 19:19:44.624630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:27.482 [2024-11-26 19:19:44.624638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:27.482 [2024-11-26 19:19:44.624857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:27.482 [2024-11-26 19:19:44.625077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:27.482 [2024-11-26 19:19:44.625084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:27.482 [2024-11-26 19:19:44.625092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:27.482 [2024-11-26 19:19:44.625099] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:27.482 [2024-11-26 19:19:44.637934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:27.482 [2024-11-26 19:19:44.638474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.482 [2024-11-26 19:19:44.638491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:27.482 [2024-11-26 19:19:44.638499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:27.482 [2024-11-26 19:19:44.638718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:27.482 [2024-11-26 19:19:44.638938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:27.482 [2024-11-26 19:19:44.638947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:27.482 [2024-11-26 19:19:44.638955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:27.482 [2024-11-26 19:19:44.638962] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:27.482 [2024-11-26 19:19:44.651769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:27.482 [2024-11-26 19:19:44.652305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.482 [2024-11-26 19:19:44.652323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:27.482 [2024-11-26 19:19:44.652331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:27.482 [2024-11-26 19:19:44.652551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:27.482 [2024-11-26 19:19:44.652770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:27.482 [2024-11-26 19:19:44.652778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:27.482 [2024-11-26 19:19:44.652785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:27.482 [2024-11-26 19:19:44.652792] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:27.482 [2024-11-26 19:19:44.665732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:27.482 [2024-11-26 19:19:44.666271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.482 [2024-11-26 19:19:44.666289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:27.482 [2024-11-26 19:19:44.666297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:27.482 [2024-11-26 19:19:44.666518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:27.482 [2024-11-26 19:19:44.666738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:27.482 [2024-11-26 19:19:44.666747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:27.482 [2024-11-26 19:19:44.666754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:27.482 [2024-11-26 19:19:44.666761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:27.482 [2024-11-26 19:19:44.679597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:27.482 [2024-11-26 19:19:44.680176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.482 [2024-11-26 19:19:44.680196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:27.482 [2024-11-26 19:19:44.680208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:27.482 [2024-11-26 19:19:44.680429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:27.482 [2024-11-26 19:19:44.680649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:27.482 [2024-11-26 19:19:44.680658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:27.482 [2024-11-26 19:19:44.680665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:27.482 [2024-11-26 19:19:44.680672] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:27.744 [2024-11-26 19:19:44.693496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:27.744 [2024-11-26 19:19:44.694133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.744 [2024-11-26 19:19:44.694188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:27.744 [2024-11-26 19:19:44.694201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:27.744 [2024-11-26 19:19:44.694446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:27.744 [2024-11-26 19:19:44.694670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:27.744 [2024-11-26 19:19:44.694679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:27.744 [2024-11-26 19:19:44.694686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:27.744 [2024-11-26 19:19:44.694695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:27.744 [2024-11-26 19:19:44.707497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:27.744 [2024-11-26 19:19:44.708131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.744 [2024-11-26 19:19:44.708184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:27.744 [2024-11-26 19:19:44.708196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:27.744 [2024-11-26 19:19:44.708441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:27.744 [2024-11-26 19:19:44.708665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:27.744 [2024-11-26 19:19:44.708674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:27.744 [2024-11-26 19:19:44.708682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:27.744 [2024-11-26 19:19:44.708690] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:27.744 [2024-11-26 19:19:44.721511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:27.744 [2024-11-26 19:19:44.722176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.744 [2024-11-26 19:19:44.722223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:27.745 [2024-11-26 19:19:44.722235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:27.745 [2024-11-26 19:19:44.722479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:27.745 [2024-11-26 19:19:44.722710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:27.745 [2024-11-26 19:19:44.722719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:27.745 [2024-11-26 19:19:44.722727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:27.745 [2024-11-26 19:19:44.722735] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:27.745 [2024-11-26 19:19:44.735338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:27.745 [2024-11-26 19:19:44.735908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.745 [2024-11-26 19:19:44.735955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:27.745 [2024-11-26 19:19:44.735966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:27.745 [2024-11-26 19:19:44.736224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:27.745 [2024-11-26 19:19:44.736450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:27.745 [2024-11-26 19:19:44.736458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:27.745 [2024-11-26 19:19:44.736467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:27.745 [2024-11-26 19:19:44.736475] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:27.745 [2024-11-26 19:19:44.749307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:27.745 [2024-11-26 19:19:44.749995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.745 [2024-11-26 19:19:44.750049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:27.745 [2024-11-26 19:19:44.750061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:27.745 [2024-11-26 19:19:44.750324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:27.745 [2024-11-26 19:19:44.750551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:27.745 [2024-11-26 19:19:44.750560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:27.745 [2024-11-26 19:19:44.750568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:27.745 [2024-11-26 19:19:44.750578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:27.745 [2024-11-26 19:19:44.763182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:27.745 [2024-11-26 19:19:44.763824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.745 [2024-11-26 19:19:44.763878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:27.745 [2024-11-26 19:19:44.763890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:27.745 [2024-11-26 19:19:44.764140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:27.745 [2024-11-26 19:19:44.764379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:27.745 [2024-11-26 19:19:44.764389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:27.745 [2024-11-26 19:19:44.764404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:27.745 [2024-11-26 19:19:44.764412] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:27.745 [2024-11-26 19:19:44.777036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:27.745 [2024-11-26 19:19:44.777723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.745 [2024-11-26 19:19:44.777782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:27.745 [2024-11-26 19:19:44.777794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:27.745 [2024-11-26 19:19:44.778046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:27.745 [2024-11-26 19:19:44.778287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:27.745 [2024-11-26 19:19:44.778297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:27.745 [2024-11-26 19:19:44.778305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:27.745 [2024-11-26 19:19:44.778314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:27.745 [2024-11-26 19:19:44.790925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:27.745 [2024-11-26 19:19:44.791547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.745 [2024-11-26 19:19:44.791606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:27.745 [2024-11-26 19:19:44.791619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:27.745 [2024-11-26 19:19:44.791870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:27.745 [2024-11-26 19:19:44.792097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:27.745 [2024-11-26 19:19:44.792106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:27.745 [2024-11-26 19:19:44.792114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:27.745 [2024-11-26 19:19:44.792123] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:27.745 [2024-11-26 19:19:44.804745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:27.745 [2024-11-26 19:19:44.805359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.745 [2024-11-26 19:19:44.805390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:27.745 [2024-11-26 19:19:44.805399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:27.745 [2024-11-26 19:19:44.805624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:27.745 [2024-11-26 19:19:44.805845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:27.745 [2024-11-26 19:19:44.805856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:27.745 [2024-11-26 19:19:44.805864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:27.745 [2024-11-26 19:19:44.805871] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:27.745 [2024-11-26 19:19:44.818697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:27.745 [2024-11-26 19:19:44.819277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.745 [2024-11-26 19:19:44.819302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:27.745 [2024-11-26 19:19:44.819310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:27.745 [2024-11-26 19:19:44.819532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:27.745 [2024-11-26 19:19:44.819753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:27.745 [2024-11-26 19:19:44.819763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:27.745 [2024-11-26 19:19:44.819770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:27.745 [2024-11-26 19:19:44.819778] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:27.745 [2024-11-26 19:19:44.832585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:27.745 [2024-11-26 19:19:44.833260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.745 [2024-11-26 19:19:44.833323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:27.745 [2024-11-26 19:19:44.833336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:27.745 [2024-11-26 19:19:44.833592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:27.745 [2024-11-26 19:19:44.833819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:27.745 [2024-11-26 19:19:44.833829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:27.745 [2024-11-26 19:19:44.833837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:27.745 [2024-11-26 19:19:44.833847] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:27.745 [2024-11-26 19:19:44.846513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:27.745 [2024-11-26 19:19:44.847242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.745 [2024-11-26 19:19:44.847307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:27.745 [2024-11-26 19:19:44.847320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:27.745 [2024-11-26 19:19:44.847575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:27.745 [2024-11-26 19:19:44.847804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:27.745 [2024-11-26 19:19:44.847815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:27.745 [2024-11-26 19:19:44.847823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:27.745 [2024-11-26 19:19:44.847833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:27.745 [2024-11-26 19:19:44.860468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:27.745 [2024-11-26 19:19:44.861053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.746 [2024-11-26 19:19:44.861083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:27.746 [2024-11-26 19:19:44.861102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:27.746 [2024-11-26 19:19:44.861336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:27.746 [2024-11-26 19:19:44.861561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:27.746 [2024-11-26 19:19:44.861572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:27.746 [2024-11-26 19:19:44.861582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:27.746 [2024-11-26 19:19:44.861592] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:27.746 [2024-11-26 19:19:44.874440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:27.746 [2024-11-26 19:19:44.875055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.746 [2024-11-26 19:19:44.875081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:27.746 [2024-11-26 19:19:44.875090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:27.746 [2024-11-26 19:19:44.875319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:27.746 [2024-11-26 19:19:44.875544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:27.746 [2024-11-26 19:19:44.875553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:27.746 [2024-11-26 19:19:44.875561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:27.746 [2024-11-26 19:19:44.875569] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:27.746 [2024-11-26 19:19:44.888402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:27.746 [2024-11-26 19:19:44.889072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.746 [2024-11-26 19:19:44.889134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:27.746 [2024-11-26 19:19:44.889147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:27.746 [2024-11-26 19:19:44.889417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:27.746 [2024-11-26 19:19:44.889645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:27.746 [2024-11-26 19:19:44.889654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:27.746 [2024-11-26 19:19:44.889662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:27.746 [2024-11-26 19:19:44.889672] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:27.746 [2024-11-26 19:19:44.902295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:27.746 [2024-11-26 19:19:44.903020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.746 [2024-11-26 19:19:44.903083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:27.746 [2024-11-26 19:19:44.903095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:27.746 [2024-11-26 19:19:44.903365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:27.746 [2024-11-26 19:19:44.903601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:27.746 [2024-11-26 19:19:44.903610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:27.746 [2024-11-26 19:19:44.903618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:27.746 [2024-11-26 19:19:44.903627] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:27.746 [2024-11-26 19:19:44.916257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:27.746 [2024-11-26 19:19:44.916936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.746 [2024-11-26 19:19:44.916998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:27.746 [2024-11-26 19:19:44.917011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:27.746 [2024-11-26 19:19:44.917281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:27.746 [2024-11-26 19:19:44.917509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:27.746 [2024-11-26 19:19:44.917519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:27.746 [2024-11-26 19:19:44.917527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:27.746 [2024-11-26 19:19:44.917536] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:27.746 [2024-11-26 19:19:44.930143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:27.746 [2024-11-26 19:19:44.930897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.746 [2024-11-26 19:19:44.930959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:27.746 [2024-11-26 19:19:44.930972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:27.746 [2024-11-26 19:19:44.931243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:27.746 [2024-11-26 19:19:44.931473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:27.746 [2024-11-26 19:19:44.931482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:27.746 [2024-11-26 19:19:44.931491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:27.746 [2024-11-26 19:19:44.931500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:27.746 [2024-11-26 19:19:44.944139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:27.746 [2024-11-26 19:19:44.944775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.746 [2024-11-26 19:19:44.944805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:27.746 [2024-11-26 19:19:44.944814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:27.746 [2024-11-26 19:19:44.945037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:27.746 [2024-11-26 19:19:44.945271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:27.746 [2024-11-26 19:19:44.945283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:27.746 [2024-11-26 19:19:44.945298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:27.746 [2024-11-26 19:19:44.945306] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.009 [2024-11-26 19:19:44.958135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.009 [2024-11-26 19:19:44.958726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.009 [2024-11-26 19:19:44.958750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.009 [2024-11-26 19:19:44.958759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.009 [2024-11-26 19:19:44.958980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.009 [2024-11-26 19:19:44.959211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.009 [2024-11-26 19:19:44.959221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.009 [2024-11-26 19:19:44.959229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.009 [2024-11-26 19:19:44.959237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.009 [2024-11-26 19:19:44.972049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.009 [2024-11-26 19:19:44.972618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.009 [2024-11-26 19:19:44.972680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.009 [2024-11-26 19:19:44.972692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.009 [2024-11-26 19:19:44.972948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.009 [2024-11-26 19:19:44.973205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.009 [2024-11-26 19:19:44.973216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.009 [2024-11-26 19:19:44.973225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.009 [2024-11-26 19:19:44.973234] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.009 [2024-11-26 19:19:44.985863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.009 [2024-11-26 19:19:44.986558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.009 [2024-11-26 19:19:44.986621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.009 [2024-11-26 19:19:44.986634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.009 [2024-11-26 19:19:44.986889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.009 [2024-11-26 19:19:44.987116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.009 [2024-11-26 19:19:44.987125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.009 [2024-11-26 19:19:44.987133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.009 [2024-11-26 19:19:44.987143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.009 [2024-11-26 19:19:44.999795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.009 [2024-11-26 19:19:45.000553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.009 [2024-11-26 19:19:45.000615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.009 [2024-11-26 19:19:45.000628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.009 [2024-11-26 19:19:45.000882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.009 [2024-11-26 19:19:45.001110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.009 [2024-11-26 19:19:45.001119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.009 [2024-11-26 19:19:45.001128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.009 [2024-11-26 19:19:45.001137] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.009 [2024-11-26 19:19:45.013777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.009 [2024-11-26 19:19:45.014343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.009 [2024-11-26 19:19:45.014404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.009 [2024-11-26 19:19:45.014417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.009 [2024-11-26 19:19:45.014672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.009 [2024-11-26 19:19:45.014900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.009 [2024-11-26 19:19:45.014909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.009 [2024-11-26 19:19:45.014917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.009 [2024-11-26 19:19:45.014926] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.009 [2024-11-26 19:19:45.027762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.009 [2024-11-26 19:19:45.029111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.009 [2024-11-26 19:19:45.029155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.009 [2024-11-26 19:19:45.029177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.009 [2024-11-26 19:19:45.029420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.009 [2024-11-26 19:19:45.029645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.009 [2024-11-26 19:19:45.029655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.009 [2024-11-26 19:19:45.029664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.009 [2024-11-26 19:19:45.029674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.009 [2024-11-26 19:19:45.041698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.009 [2024-11-26 19:19:45.042306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.009 [2024-11-26 19:19:45.042369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.009 [2024-11-26 19:19:45.042391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.009 [2024-11-26 19:19:45.042647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.009 [2024-11-26 19:19:45.042874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.009 [2024-11-26 19:19:45.042884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.009 [2024-11-26 19:19:45.042892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.009 [2024-11-26 19:19:45.042901] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.009 [2024-11-26 19:19:45.055531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.009 [2024-11-26 19:19:45.056217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.009 [2024-11-26 19:19:45.056281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.009 [2024-11-26 19:19:45.056295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.009 [2024-11-26 19:19:45.056550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.009 [2024-11-26 19:19:45.056778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.009 [2024-11-26 19:19:45.056788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.009 [2024-11-26 19:19:45.056797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.009 [2024-11-26 19:19:45.056806] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.009 [2024-11-26 19:19:45.069441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.009 [2024-11-26 19:19:45.070086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.009 [2024-11-26 19:19:45.070116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.009 [2024-11-26 19:19:45.070125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.009 [2024-11-26 19:19:45.070359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.009 [2024-11-26 19:19:45.070583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.009 [2024-11-26 19:19:45.070593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.009 [2024-11-26 19:19:45.070600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.009 [2024-11-26 19:19:45.070608] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.009 8985.67 IOPS, 35.10 MiB/s [2024-11-26T18:19:45.222Z]
[2024-11-26 19:19:45.085087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.009 [2024-11-26 19:19:45.085779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.009 [2024-11-26 19:19:45.085842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.009 [2024-11-26 19:19:45.085855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.009 [2024-11-26 19:19:45.086110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.009 [2024-11-26 19:19:45.086358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.010 [2024-11-26 19:19:45.086369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.010 [2024-11-26 19:19:45.086377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.010 [2024-11-26 19:19:45.086386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.010 [2024-11-26 19:19:45.099006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.010 [2024-11-26 19:19:45.099742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.010 [2024-11-26 19:19:45.099805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.010 [2024-11-26 19:19:45.099818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.010 [2024-11-26 19:19:45.100073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.010 [2024-11-26 19:19:45.100315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.010 [2024-11-26 19:19:45.100326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.010 [2024-11-26 19:19:45.100335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.010 [2024-11-26 19:19:45.100344] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.010 [2024-11-26 19:19:45.112963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.010 [2024-11-26 19:19:45.113689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.010 [2024-11-26 19:19:45.113751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.010 [2024-11-26 19:19:45.113764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.010 [2024-11-26 19:19:45.114020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.010 [2024-11-26 19:19:45.114256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.010 [2024-11-26 19:19:45.114266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.010 [2024-11-26 19:19:45.114274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.010 [2024-11-26 19:19:45.114284] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.010 [2024-11-26 19:19:45.126894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.010 [2024-11-26 19:19:45.127596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.010 [2024-11-26 19:19:45.127659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.010 [2024-11-26 19:19:45.127672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.010 [2024-11-26 19:19:45.127927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.010 [2024-11-26 19:19:45.128154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.010 [2024-11-26 19:19:45.128180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.010 [2024-11-26 19:19:45.128195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.010 [2024-11-26 19:19:45.128204] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.010 [2024-11-26 19:19:45.140841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.010 [2024-11-26 19:19:45.141535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.010 [2024-11-26 19:19:45.141598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.010 [2024-11-26 19:19:45.141611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.010 [2024-11-26 19:19:45.141866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.010 [2024-11-26 19:19:45.142094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.010 [2024-11-26 19:19:45.142103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.010 [2024-11-26 19:19:45.142111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.010 [2024-11-26 19:19:45.142121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.010 [2024-11-26 19:19:45.154757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.010 [2024-11-26 19:19:45.155515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.010 [2024-11-26 19:19:45.155577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.010 [2024-11-26 19:19:45.155590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.010 [2024-11-26 19:19:45.155846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.010 [2024-11-26 19:19:45.156073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.010 [2024-11-26 19:19:45.156083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.010 [2024-11-26 19:19:45.156091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.010 [2024-11-26 19:19:45.156100] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.010 [2024-11-26 19:19:45.168736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.010 [2024-11-26 19:19:45.169472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.010 [2024-11-26 19:19:45.169534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.010 [2024-11-26 19:19:45.169547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.010 [2024-11-26 19:19:45.169801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.010 [2024-11-26 19:19:45.170028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.010 [2024-11-26 19:19:45.170037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.010 [2024-11-26 19:19:45.170046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.010 [2024-11-26 19:19:45.170056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.010 [2024-11-26 19:19:45.182753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.010 [2024-11-26 19:19:45.183505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.010 [2024-11-26 19:19:45.183569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.010 [2024-11-26 19:19:45.183582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.010 [2024-11-26 19:19:45.183838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.010 [2024-11-26 19:19:45.184064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.010 [2024-11-26 19:19:45.184074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.010 [2024-11-26 19:19:45.184082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.010 [2024-11-26 19:19:45.184091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.010 [2024-11-26 19:19:45.196729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.010 [2024-11-26 19:19:45.197336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.010 [2024-11-26 19:19:45.197399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.010 [2024-11-26 19:19:45.197414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.010 [2024-11-26 19:19:45.197671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.010 [2024-11-26 19:19:45.197899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.010 [2024-11-26 19:19:45.197910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.010 [2024-11-26 19:19:45.197919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.010 [2024-11-26 19:19:45.197928] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:28.010 [2024-11-26 19:19:45.210580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.010 [2024-11-26 19:19:45.211175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.010 [2024-11-26 19:19:45.211205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.010 [2024-11-26 19:19:45.211215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.010 [2024-11-26 19:19:45.211439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.010 [2024-11-26 19:19:45.211661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.010 [2024-11-26 19:19:45.211672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.010 [2024-11-26 19:19:45.211679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.011 [2024-11-26 19:19:45.211687] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:28.273 [2024-11-26 19:19:45.224515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.273 [2024-11-26 19:19:45.225202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.273 [2024-11-26 19:19:45.225265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.273 [2024-11-26 19:19:45.225287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.273 [2024-11-26 19:19:45.225543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.273 [2024-11-26 19:19:45.225771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.273 [2024-11-26 19:19:45.225780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.273 [2024-11-26 19:19:45.225789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.273 [2024-11-26 19:19:45.225799] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:28.273 [2024-11-26 19:19:45.238450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.273 [2024-11-26 19:19:45.239206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.273 [2024-11-26 19:19:45.239270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.274 [2024-11-26 19:19:45.239283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.274 [2024-11-26 19:19:45.239538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.274 [2024-11-26 19:19:45.239764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.274 [2024-11-26 19:19:45.239773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.274 [2024-11-26 19:19:45.239781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.274 [2024-11-26 19:19:45.239791] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:28.274 [2024-11-26 19:19:45.252449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.274 [2024-11-26 19:19:45.253109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.274 [2024-11-26 19:19:45.253182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.274 [2024-11-26 19:19:45.253196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.274 [2024-11-26 19:19:45.253452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.274 [2024-11-26 19:19:45.253679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.274 [2024-11-26 19:19:45.253688] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.274 [2024-11-26 19:19:45.253697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.274 [2024-11-26 19:19:45.253706] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:28.274 [2024-11-26 19:19:45.266332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.274 [2024-11-26 19:19:45.267051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.274 [2024-11-26 19:19:45.267114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.274 [2024-11-26 19:19:45.267127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.274 [2024-11-26 19:19:45.267398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.274 [2024-11-26 19:19:45.267634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.274 [2024-11-26 19:19:45.267644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.274 [2024-11-26 19:19:45.267652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.274 [2024-11-26 19:19:45.267661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:28.274 [2024-11-26 19:19:45.280296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.274 [2024-11-26 19:19:45.281029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.274 [2024-11-26 19:19:45.281092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.274 [2024-11-26 19:19:45.281105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.274 [2024-11-26 19:19:45.281376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.274 [2024-11-26 19:19:45.281605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.274 [2024-11-26 19:19:45.281614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.274 [2024-11-26 19:19:45.281622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.274 [2024-11-26 19:19:45.281631] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:28.274 [2024-11-26 19:19:45.293061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.274 [2024-11-26 19:19:45.293706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.274 [2024-11-26 19:19:45.293763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.274 [2024-11-26 19:19:45.293773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.274 [2024-11-26 19:19:45.293957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.274 [2024-11-26 19:19:45.294115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.274 [2024-11-26 19:19:45.294121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.274 [2024-11-26 19:19:45.294128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.274 [2024-11-26 19:19:45.294136] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:28.274 [2024-11-26 19:19:45.305837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.274 [2024-11-26 19:19:45.306503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.274 [2024-11-26 19:19:45.306551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.274 [2024-11-26 19:19:45.306560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.274 [2024-11-26 19:19:45.306739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.274 [2024-11-26 19:19:45.306895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.274 [2024-11-26 19:19:45.306901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.274 [2024-11-26 19:19:45.306912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.274 [2024-11-26 19:19:45.306920] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:28.274 [2024-11-26 19:19:45.318603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.274 [2024-11-26 19:19:45.319197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.274 [2024-11-26 19:19:45.319243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.274 [2024-11-26 19:19:45.319253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.274 [2024-11-26 19:19:45.319430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.274 [2024-11-26 19:19:45.319586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.274 [2024-11-26 19:19:45.319593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.274 [2024-11-26 19:19:45.319598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.274 [2024-11-26 19:19:45.319605] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:28.274 [2024-11-26 19:19:45.331281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.274 [2024-11-26 19:19:45.331861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.274 [2024-11-26 19:19:45.331903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.274 [2024-11-26 19:19:45.331911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.274 [2024-11-26 19:19:45.332086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.274 [2024-11-26 19:19:45.332252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.274 [2024-11-26 19:19:45.332260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.274 [2024-11-26 19:19:45.332265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.274 [2024-11-26 19:19:45.332271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:28.274 [2024-11-26 19:19:45.343949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.274 [2024-11-26 19:19:45.344544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.274 [2024-11-26 19:19:45.344585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.274 [2024-11-26 19:19:45.344593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.274 [2024-11-26 19:19:45.344766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.274 [2024-11-26 19:19:45.344922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.274 [2024-11-26 19:19:45.344928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.274 [2024-11-26 19:19:45.344934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.274 [2024-11-26 19:19:45.344940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:28.274 [2024-11-26 19:19:45.356628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.274 [2024-11-26 19:19:45.357141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.274 [2024-11-26 19:19:45.357164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.274 [2024-11-26 19:19:45.357171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.274 [2024-11-26 19:19:45.357324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.274 [2024-11-26 19:19:45.357475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.274 [2024-11-26 19:19:45.357482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.274 [2024-11-26 19:19:45.357488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.274 [2024-11-26 19:19:45.357493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:28.274 [2024-11-26 19:19:45.369281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.274 [2024-11-26 19:19:45.369745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.274 [2024-11-26 19:19:45.369760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.274 [2024-11-26 19:19:45.369766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.274 [2024-11-26 19:19:45.369919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.274 [2024-11-26 19:19:45.370070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.274 [2024-11-26 19:19:45.370075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.274 [2024-11-26 19:19:45.370080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.274 [2024-11-26 19:19:45.370086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:28.274 [2024-11-26 19:19:45.381928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.274 [2024-11-26 19:19:45.382512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.274 [2024-11-26 19:19:45.382546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.274 [2024-11-26 19:19:45.382555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.274 [2024-11-26 19:19:45.382724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.274 [2024-11-26 19:19:45.382878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.274 [2024-11-26 19:19:45.382885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.274 [2024-11-26 19:19:45.382890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.274 [2024-11-26 19:19:45.382896] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:28.274 [2024-11-26 19:19:45.394686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.274 [2024-11-26 19:19:45.395142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.274 [2024-11-26 19:19:45.395179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.274 [2024-11-26 19:19:45.395191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.274 [2024-11-26 19:19:45.395359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.274 [2024-11-26 19:19:45.395513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.274 [2024-11-26 19:19:45.395519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.274 [2024-11-26 19:19:45.395524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.274 [2024-11-26 19:19:45.395530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:28.274 [2024-11-26 19:19:45.407313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.274 [2024-11-26 19:19:45.407887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.274 [2024-11-26 19:19:45.407919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.274 [2024-11-26 19:19:45.407927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.274 [2024-11-26 19:19:45.408094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.274 [2024-11-26 19:19:45.408256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.274 [2024-11-26 19:19:45.408263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.274 [2024-11-26 19:19:45.408268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.274 [2024-11-26 19:19:45.408274] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:28.274 [2024-11-26 19:19:45.420067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.274 [2024-11-26 19:19:45.420603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.274 [2024-11-26 19:19:45.420634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.274 [2024-11-26 19:19:45.420642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.274 [2024-11-26 19:19:45.420809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.274 [2024-11-26 19:19:45.420962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.274 [2024-11-26 19:19:45.420968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.274 [2024-11-26 19:19:45.420974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.274 [2024-11-26 19:19:45.420980] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:28.274 [2024-11-26 19:19:45.432767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.274 [2024-11-26 19:19:45.433284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.275 [2024-11-26 19:19:45.433315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.275 [2024-11-26 19:19:45.433323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.275 [2024-11-26 19:19:45.433492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.275 [2024-11-26 19:19:45.433649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.275 [2024-11-26 19:19:45.433656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.275 [2024-11-26 19:19:45.433661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.275 [2024-11-26 19:19:45.433667] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:28.275 [2024-11-26 19:19:45.445479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.275 [2024-11-26 19:19:45.446055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.275 [2024-11-26 19:19:45.446085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.275 [2024-11-26 19:19:45.446093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.275 [2024-11-26 19:19:45.446268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.275 [2024-11-26 19:19:45.446423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.275 [2024-11-26 19:19:45.446429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.275 [2024-11-26 19:19:45.446434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.275 [2024-11-26 19:19:45.446439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:28.275 [2024-11-26 19:19:45.458232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.275 [2024-11-26 19:19:45.458805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.275 [2024-11-26 19:19:45.458835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.275 [2024-11-26 19:19:45.458844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.275 [2024-11-26 19:19:45.459010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.275 [2024-11-26 19:19:45.459172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.275 [2024-11-26 19:19:45.459178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.275 [2024-11-26 19:19:45.459184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.275 [2024-11-26 19:19:45.459189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:28.275 [2024-11-26 19:19:45.470967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.275 [2024-11-26 19:19:45.471518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.275 [2024-11-26 19:19:45.471548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.275 [2024-11-26 19:19:45.471556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.275 [2024-11-26 19:19:45.471723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.275 [2024-11-26 19:19:45.471876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.275 [2024-11-26 19:19:45.471882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.275 [2024-11-26 19:19:45.471891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.275 [2024-11-26 19:19:45.471897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:28.539 [2024-11-26 19:19:45.483695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.539 [2024-11-26 19:19:45.484263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.539 [2024-11-26 19:19:45.484293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.539 [2024-11-26 19:19:45.484302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.539 [2024-11-26 19:19:45.484470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.539 [2024-11-26 19:19:45.484624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.539 [2024-11-26 19:19:45.484630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.539 [2024-11-26 19:19:45.484636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.539 [2024-11-26 19:19:45.484642] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:28.539 [2024-11-26 19:19:45.496445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.539 [2024-11-26 19:19:45.497022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.539 [2024-11-26 19:19:45.497052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.539 [2024-11-26 19:19:45.497060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.539 [2024-11-26 19:19:45.497234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.539 [2024-11-26 19:19:45.497388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.539 [2024-11-26 19:19:45.497394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.539 [2024-11-26 19:19:45.497400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.539 [2024-11-26 19:19:45.497405] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:28.539 [2024-11-26 19:19:45.509186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.539 [2024-11-26 19:19:45.509765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.539 [2024-11-26 19:19:45.509795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.539 [2024-11-26 19:19:45.509804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.539 [2024-11-26 19:19:45.509972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.539 [2024-11-26 19:19:45.510126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.539 [2024-11-26 19:19:45.510132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.539 [2024-11-26 19:19:45.510138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.539 [2024-11-26 19:19:45.510143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:28.539 [2024-11-26 19:19:45.521803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.539 [2024-11-26 19:19:45.522390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.539 [2024-11-26 19:19:45.522421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.539 [2024-11-26 19:19:45.522430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.539 [2024-11-26 19:19:45.522599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.539 [2024-11-26 19:19:45.522752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.539 [2024-11-26 19:19:45.522759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.539 [2024-11-26 19:19:45.522764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.539 [2024-11-26 19:19:45.522770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:28.539 [2024-11-26 19:19:45.534418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.539 [2024-11-26 19:19:45.535007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.539 [2024-11-26 19:19:45.535037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.539 [2024-11-26 19:19:45.535045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.539 [2024-11-26 19:19:45.535219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.539 [2024-11-26 19:19:45.535374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.539 [2024-11-26 19:19:45.535380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.539 [2024-11-26 19:19:45.535386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.539 [2024-11-26 19:19:45.535391] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:28.539 [2024-11-26 19:19:45.547035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.539 [2024-11-26 19:19:45.547419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.539 [2024-11-26 19:19:45.547449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.539 [2024-11-26 19:19:45.547458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.539 [2024-11-26 19:19:45.547626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.539 [2024-11-26 19:19:45.547779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.539 [2024-11-26 19:19:45.547785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.539 [2024-11-26 19:19:45.547791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.539 [2024-11-26 19:19:45.547796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:28.539 [2024-11-26 19:19:45.559735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.539 [2024-11-26 19:19:45.560263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.539 [2024-11-26 19:19:45.560294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.540 [2024-11-26 19:19:45.560310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.540 [2024-11-26 19:19:45.560479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.540 [2024-11-26 19:19:45.560633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.540 [2024-11-26 19:19:45.560639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.540 [2024-11-26 19:19:45.560644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.540 [2024-11-26 19:19:45.560650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:28.540 [2024-11-26 19:19:45.572443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.540 [2024-11-26 19:19:45.573024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.540 [2024-11-26 19:19:45.573054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.540 [2024-11-26 19:19:45.573062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.540 [2024-11-26 19:19:45.573236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.540 [2024-11-26 19:19:45.573390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.540 [2024-11-26 19:19:45.573397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.540 [2024-11-26 19:19:45.573402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.540 [2024-11-26 19:19:45.573407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:28.540 [2024-11-26 19:19:45.585059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.540 [2024-11-26 19:19:45.585624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.540 [2024-11-26 19:19:45.585654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.540 [2024-11-26 19:19:45.585663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.540 [2024-11-26 19:19:45.585829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.540 [2024-11-26 19:19:45.585983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.540 [2024-11-26 19:19:45.585989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.540 [2024-11-26 19:19:45.585994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.540 [2024-11-26 19:19:45.586000] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:28.540 [2024-11-26 19:19:45.597801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.540 [2024-11-26 19:19:45.598478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.540 [2024-11-26 19:19:45.598508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.540 [2024-11-26 19:19:45.598516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.540 [2024-11-26 19:19:45.598683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.540 [2024-11-26 19:19:45.598840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.540 [2024-11-26 19:19:45.598846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.540 [2024-11-26 19:19:45.598852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.540 [2024-11-26 19:19:45.598858] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:28.540 [2024-11-26 19:19:45.610513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.540 [2024-11-26 19:19:45.611105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.540 [2024-11-26 19:19:45.611135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.540 [2024-11-26 19:19:45.611144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.540 [2024-11-26 19:19:45.611321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.540 [2024-11-26 19:19:45.611476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.540 [2024-11-26 19:19:45.611482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.540 [2024-11-26 19:19:45.611488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.540 [2024-11-26 19:19:45.611495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:28.540 [2024-11-26 19:19:45.623141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.540 [2024-11-26 19:19:45.623719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.540 [2024-11-26 19:19:45.623750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.540 [2024-11-26 19:19:45.623758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.540 [2024-11-26 19:19:45.623925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.540 [2024-11-26 19:19:45.624080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.540 [2024-11-26 19:19:45.624086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.540 [2024-11-26 19:19:45.624092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.540 [2024-11-26 19:19:45.624098] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:28.540 [2024-11-26 19:19:45.635815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.540 [2024-11-26 19:19:45.636407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.540 [2024-11-26 19:19:45.636438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.540 [2024-11-26 19:19:45.636446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.540 [2024-11-26 19:19:45.636613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.540 [2024-11-26 19:19:45.636767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.540 [2024-11-26 19:19:45.636773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.540 [2024-11-26 19:19:45.636782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.540 [2024-11-26 19:19:45.636788] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:28.540 [2024-11-26 19:19:45.648450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.540 [2024-11-26 19:19:45.648950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.540 [2024-11-26 19:19:45.648964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.540 [2024-11-26 19:19:45.648970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.540 [2024-11-26 19:19:45.649122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.540 [2024-11-26 19:19:45.649280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.540 [2024-11-26 19:19:45.649287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.540 [2024-11-26 19:19:45.649292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.540 [2024-11-26 19:19:45.649297] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:28.540 [2024-11-26 19:19:45.661075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.540 [2024-11-26 19:19:45.661545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.540 [2024-11-26 19:19:45.661558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.540 [2024-11-26 19:19:45.661563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.540 [2024-11-26 19:19:45.661714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.540 [2024-11-26 19:19:45.661865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.540 [2024-11-26 19:19:45.661870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.540 [2024-11-26 19:19:45.661875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.540 [2024-11-26 19:19:45.661880] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:28.540 [2024-11-26 19:19:45.673795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.540 [2024-11-26 19:19:45.674391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.540 [2024-11-26 19:19:45.674422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.540 [2024-11-26 19:19:45.674430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.540 [2024-11-26 19:19:45.674597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.540 [2024-11-26 19:19:45.674751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.540 [2024-11-26 19:19:45.674757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.540 [2024-11-26 19:19:45.674762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.540 [2024-11-26 19:19:45.674768] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:28.540 [2024-11-26 19:19:45.686431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.541 [2024-11-26 19:19:45.686979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.541 [2024-11-26 19:19:45.687010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.541 [2024-11-26 19:19:45.687019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.541 [2024-11-26 19:19:45.687191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.541 [2024-11-26 19:19:45.687346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.541 [2024-11-26 19:19:45.687352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.541 [2024-11-26 19:19:45.687357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.541 [2024-11-26 19:19:45.687363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:28.541 [2024-11-26 19:19:45.699080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.541 [2024-11-26 19:19:45.699577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.541 [2024-11-26 19:19:45.699592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.541 [2024-11-26 19:19:45.699598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.541 [2024-11-26 19:19:45.699749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.541 [2024-11-26 19:19:45.699900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.541 [2024-11-26 19:19:45.699906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.541 [2024-11-26 19:19:45.699911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.541 [2024-11-26 19:19:45.699916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:28.541 [2024-11-26 19:19:45.711702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.541 [2024-11-26 19:19:45.712153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.541 [2024-11-26 19:19:45.712170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.541 [2024-11-26 19:19:45.712175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.541 [2024-11-26 19:19:45.712326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.541 [2024-11-26 19:19:45.712477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.541 [2024-11-26 19:19:45.712482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.541 [2024-11-26 19:19:45.712487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.541 [2024-11-26 19:19:45.712492] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:28.541 [2024-11-26 19:19:45.724416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:28.541 [2024-11-26 19:19:45.724963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.541 [2024-11-26 19:19:45.724993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:28.541 [2024-11-26 19:19:45.725005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:28.541 [2024-11-26 19:19:45.725177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:28.541 [2024-11-26 19:19:45.725332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:28.541 [2024-11-26 19:19:45.725338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:28.541 [2024-11-26 19:19:45.725343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:28.541 [2024-11-26 19:19:45.725349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:28.541 [2024-11-26 19:19:45.737139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.541 [2024-11-26 19:19:45.737584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.541 [2024-11-26 19:19:45.737614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.541 [2024-11-26 19:19:45.737623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.541 [2024-11-26 19:19:45.737790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.541 [2024-11-26 19:19:45.737944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.541 [2024-11-26 19:19:45.737950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.541 [2024-11-26 19:19:45.737955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.541 [2024-11-26 19:19:45.737961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.803 [2024-11-26 19:19:45.749774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.803 [2024-11-26 19:19:45.750261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.803 [2024-11-26 19:19:45.750291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.803 [2024-11-26 19:19:45.750300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.803 [2024-11-26 19:19:45.750469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.803 [2024-11-26 19:19:45.750622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.803 [2024-11-26 19:19:45.750628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.803 [2024-11-26 19:19:45.750634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.803 [2024-11-26 19:19:45.750640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.803 [2024-11-26 19:19:45.762445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.803 [2024-11-26 19:19:45.763035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.803 [2024-11-26 19:19:45.763065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.803 [2024-11-26 19:19:45.763074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.803 [2024-11-26 19:19:45.763247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.803 [2024-11-26 19:19:45.763405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.803 [2024-11-26 19:19:45.763411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.803 [2024-11-26 19:19:45.763416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.803 [2024-11-26 19:19:45.763422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.803 [2024-11-26 19:19:45.775073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.803 [2024-11-26 19:19:45.775626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.803 [2024-11-26 19:19:45.775655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.803 [2024-11-26 19:19:45.775664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.803 [2024-11-26 19:19:45.775831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.803 [2024-11-26 19:19:45.775993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.803 [2024-11-26 19:19:45.775999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.803 [2024-11-26 19:19:45.776005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.803 [2024-11-26 19:19:45.776011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.803 [2024-11-26 19:19:45.787811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.804 [2024-11-26 19:19:45.788283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.804 [2024-11-26 19:19:45.788314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.804 [2024-11-26 19:19:45.788322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.804 [2024-11-26 19:19:45.788491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.804 [2024-11-26 19:19:45.788645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.804 [2024-11-26 19:19:45.788651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.804 [2024-11-26 19:19:45.788657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.804 [2024-11-26 19:19:45.788663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.804 [2024-11-26 19:19:45.800468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.804 [2024-11-26 19:19:45.801050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.804 [2024-11-26 19:19:45.801080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.804 [2024-11-26 19:19:45.801089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.804 [2024-11-26 19:19:45.801259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.804 [2024-11-26 19:19:45.801414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.804 [2024-11-26 19:19:45.801420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.804 [2024-11-26 19:19:45.801429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.804 [2024-11-26 19:19:45.801434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.804 [2024-11-26 19:19:45.813168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.804 [2024-11-26 19:19:45.813625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.804 [2024-11-26 19:19:45.813640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.804 [2024-11-26 19:19:45.813646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.804 [2024-11-26 19:19:45.813798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.804 [2024-11-26 19:19:45.813949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.804 [2024-11-26 19:19:45.813955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.804 [2024-11-26 19:19:45.813960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.804 [2024-11-26 19:19:45.813965] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.804 [2024-11-26 19:19:45.825897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.804 [2024-11-26 19:19:45.826455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.804 [2024-11-26 19:19:45.826486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.804 [2024-11-26 19:19:45.826494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.804 [2024-11-26 19:19:45.826661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.804 [2024-11-26 19:19:45.826814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.804 [2024-11-26 19:19:45.826821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.804 [2024-11-26 19:19:45.826826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.804 [2024-11-26 19:19:45.826831] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.804 [2024-11-26 19:19:45.838623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.804 [2024-11-26 19:19:45.839245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.804 [2024-11-26 19:19:45.839276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.804 [2024-11-26 19:19:45.839284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.804 [2024-11-26 19:19:45.839453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.804 [2024-11-26 19:19:45.839607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.804 [2024-11-26 19:19:45.839613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.804 [2024-11-26 19:19:45.839619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.804 [2024-11-26 19:19:45.839624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.804 [2024-11-26 19:19:45.851285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.804 [2024-11-26 19:19:45.851775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.804 [2024-11-26 19:19:45.851790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.804 [2024-11-26 19:19:45.851796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.804 [2024-11-26 19:19:45.851947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.804 [2024-11-26 19:19:45.852098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.804 [2024-11-26 19:19:45.852103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.804 [2024-11-26 19:19:45.852108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.804 [2024-11-26 19:19:45.852113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.804 [2024-11-26 19:19:45.863908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.804 [2024-11-26 19:19:45.864380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.804 [2024-11-26 19:19:45.864410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.804 [2024-11-26 19:19:45.864419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.804 [2024-11-26 19:19:45.864586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.804 [2024-11-26 19:19:45.864741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.804 [2024-11-26 19:19:45.864748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.804 [2024-11-26 19:19:45.864754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.804 [2024-11-26 19:19:45.864760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.804 [2024-11-26 19:19:45.876568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.804 [2024-11-26 19:19:45.877146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.804 [2024-11-26 19:19:45.877183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.804 [2024-11-26 19:19:45.877192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.804 [2024-11-26 19:19:45.877362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.804 [2024-11-26 19:19:45.877516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.804 [2024-11-26 19:19:45.877523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.804 [2024-11-26 19:19:45.877528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.804 [2024-11-26 19:19:45.877534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.804 [2024-11-26 19:19:45.889329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.804 [2024-11-26 19:19:45.889812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.804 [2024-11-26 19:19:45.889841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.804 [2024-11-26 19:19:45.889854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.804 [2024-11-26 19:19:45.890021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.804 [2024-11-26 19:19:45.890181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.804 [2024-11-26 19:19:45.890188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.804 [2024-11-26 19:19:45.890193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.804 [2024-11-26 19:19:45.890199] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.804 [2024-11-26 19:19:45.901984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.804 [2024-11-26 19:19:45.902403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.805 [2024-11-26 19:19:45.902419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.805 [2024-11-26 19:19:45.902425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.805 [2024-11-26 19:19:45.902576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.805 [2024-11-26 19:19:45.902727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.805 [2024-11-26 19:19:45.902733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.805 [2024-11-26 19:19:45.902738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.805 [2024-11-26 19:19:45.902743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.805 [2024-11-26 19:19:45.914678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.805 [2024-11-26 19:19:45.915170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.805 [2024-11-26 19:19:45.915182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.805 [2024-11-26 19:19:45.915188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.805 [2024-11-26 19:19:45.915339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.805 [2024-11-26 19:19:45.915489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.805 [2024-11-26 19:19:45.915495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.805 [2024-11-26 19:19:45.915500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.805 [2024-11-26 19:19:45.915505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.805 [2024-11-26 19:19:45.927292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.805 [2024-11-26 19:19:45.927785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.805 [2024-11-26 19:19:45.927798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.805 [2024-11-26 19:19:45.927803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.805 [2024-11-26 19:19:45.927954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.805 [2024-11-26 19:19:45.928108] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.805 [2024-11-26 19:19:45.928114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.805 [2024-11-26 19:19:45.928119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.805 [2024-11-26 19:19:45.928124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.805 [2024-11-26 19:19:45.939911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.805 [2024-11-26 19:19:45.940261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.805 [2024-11-26 19:19:45.940274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.805 [2024-11-26 19:19:45.940279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.805 [2024-11-26 19:19:45.940431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.805 [2024-11-26 19:19:45.940582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.805 [2024-11-26 19:19:45.940587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.805 [2024-11-26 19:19:45.940592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.805 [2024-11-26 19:19:45.940597] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.805 [2024-11-26 19:19:45.952526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.805 [2024-11-26 19:19:45.953015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.805 [2024-11-26 19:19:45.953028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.805 [2024-11-26 19:19:45.953033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.805 [2024-11-26 19:19:45.953188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.805 [2024-11-26 19:19:45.953340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.805 [2024-11-26 19:19:45.953346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.805 [2024-11-26 19:19:45.953351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.805 [2024-11-26 19:19:45.953355] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.805 [2024-11-26 19:19:45.965138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.805 [2024-11-26 19:19:45.965617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.805 [2024-11-26 19:19:45.965629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.805 [2024-11-26 19:19:45.965635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.805 [2024-11-26 19:19:45.965785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.805 [2024-11-26 19:19:45.965936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.805 [2024-11-26 19:19:45.965942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.805 [2024-11-26 19:19:45.965951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.805 [2024-11-26 19:19:45.965955] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.805 [2024-11-26 19:19:45.977892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.805 [2024-11-26 19:19:45.978242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.805 [2024-11-26 19:19:45.978255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.805 [2024-11-26 19:19:45.978260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.805 [2024-11-26 19:19:45.978411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.805 [2024-11-26 19:19:45.978561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.805 [2024-11-26 19:19:45.978567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.805 [2024-11-26 19:19:45.978572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.805 [2024-11-26 19:19:45.978576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.805 [2024-11-26 19:19:45.990523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.805 [2024-11-26 19:19:45.990905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.805 [2024-11-26 19:19:45.990918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.805 [2024-11-26 19:19:45.990923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.805 [2024-11-26 19:19:45.991073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.805 [2024-11-26 19:19:45.991230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.805 [2024-11-26 19:19:45.991236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.805 [2024-11-26 19:19:45.991242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.805 [2024-11-26 19:19:45.991247] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:28.805 [2024-11-26 19:19:46.003195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:28.805 [2024-11-26 19:19:46.003679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.805 [2024-11-26 19:19:46.003691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:28.805 [2024-11-26 19:19:46.003697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:28.805 [2024-11-26 19:19:46.003847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:28.805 [2024-11-26 19:19:46.003998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:28.805 [2024-11-26 19:19:46.004004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:28.805 [2024-11-26 19:19:46.004008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:28.805 [2024-11-26 19:19:46.004013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.067 [2024-11-26 19:19:46.015823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.067 [2024-11-26 19:19:46.016379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.067 [2024-11-26 19:19:46.016410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.067 [2024-11-26 19:19:46.016418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.067 [2024-11-26 19:19:46.016584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.067 [2024-11-26 19:19:46.016738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.067 [2024-11-26 19:19:46.016745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.067 [2024-11-26 19:19:46.016750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.067 [2024-11-26 19:19:46.016755] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.067 [2024-11-26 19:19:46.028556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.067 [2024-11-26 19:19:46.029135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.067 [2024-11-26 19:19:46.029172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.067 [2024-11-26 19:19:46.029181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.067 [2024-11-26 19:19:46.029350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.067 [2024-11-26 19:19:46.029503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.067 [2024-11-26 19:19:46.029509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.067 [2024-11-26 19:19:46.029515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.067 [2024-11-26 19:19:46.029521] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.067 [2024-11-26 19:19:46.041187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.067 [2024-11-26 19:19:46.041671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.067 [2024-11-26 19:19:46.041686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.067 [2024-11-26 19:19:46.041691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.067 [2024-11-26 19:19:46.041843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.067 [2024-11-26 19:19:46.041994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.067 [2024-11-26 19:19:46.042000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.067 [2024-11-26 19:19:46.042005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.067 [2024-11-26 19:19:46.042010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.067 [2024-11-26 19:19:46.053809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.067 [2024-11-26 19:19:46.054285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.067 [2024-11-26 19:19:46.054298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.067 [2024-11-26 19:19:46.054307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.067 [2024-11-26 19:19:46.054459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.067 [2024-11-26 19:19:46.054610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.067 [2024-11-26 19:19:46.054615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.067 [2024-11-26 19:19:46.054620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.067 [2024-11-26 19:19:46.054625] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.067 [2024-11-26 19:19:46.066562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.067 [2024-11-26 19:19:46.066876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.067 [2024-11-26 19:19:46.066889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.067 [2024-11-26 19:19:46.066895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.067 [2024-11-26 19:19:46.067046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.067 [2024-11-26 19:19:46.067202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.067 [2024-11-26 19:19:46.067208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.067 [2024-11-26 19:19:46.067213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.068 [2024-11-26 19:19:46.067218] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.068 [2024-11-26 19:19:46.079310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.068 [2024-11-26 19:19:46.079864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.068 [2024-11-26 19:19:46.079894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.068 [2024-11-26 19:19:46.079902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.068 [2024-11-26 19:19:46.080069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.068 [2024-11-26 19:19:46.080235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.068 [2024-11-26 19:19:46.080243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.068 [2024-11-26 19:19:46.080249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.068 [2024-11-26 19:19:46.080254] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.068 6739.25 IOPS, 26.33 MiB/s [2024-11-26T18:19:46.281Z] [2024-11-26 19:19:46.092052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.068 [2024-11-26 19:19:46.092584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.068 [2024-11-26 19:19:46.092600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.068 [2024-11-26 19:19:46.092605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.068 [2024-11-26 19:19:46.092757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.068 [2024-11-26 19:19:46.092912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.068 [2024-11-26 19:19:46.092918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.068 [2024-11-26 19:19:46.092923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.068 [2024-11-26 19:19:46.092928] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.068 [2024-11-26 19:19:46.104730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.068 [2024-11-26 19:19:46.105378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.068 [2024-11-26 19:19:46.105408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.068 [2024-11-26 19:19:46.105417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.068 [2024-11-26 19:19:46.105584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.068 [2024-11-26 19:19:46.105738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.068 [2024-11-26 19:19:46.105744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.068 [2024-11-26 19:19:46.105750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.068 [2024-11-26 19:19:46.105755] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.068 [2024-11-26 19:19:46.117401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.068 [2024-11-26 19:19:46.117896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.068 [2024-11-26 19:19:46.117912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.068 [2024-11-26 19:19:46.117918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.068 [2024-11-26 19:19:46.118070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.068 [2024-11-26 19:19:46.118225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.068 [2024-11-26 19:19:46.118233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.068 [2024-11-26 19:19:46.118238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.068 [2024-11-26 19:19:46.118244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.068 [2024-11-26 19:19:46.130032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.068 [2024-11-26 19:19:46.130393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.068 [2024-11-26 19:19:46.130406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.068 [2024-11-26 19:19:46.130412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.068 [2024-11-26 19:19:46.130563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.068 [2024-11-26 19:19:46.130714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.068 [2024-11-26 19:19:46.130720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.068 [2024-11-26 19:19:46.130728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.068 [2024-11-26 19:19:46.130733] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.068 [2024-11-26 19:19:46.142675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.068 [2024-11-26 19:19:46.143125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.068 [2024-11-26 19:19:46.143138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.068 [2024-11-26 19:19:46.143143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.068 [2024-11-26 19:19:46.143299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.068 [2024-11-26 19:19:46.143452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.068 [2024-11-26 19:19:46.143457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.068 [2024-11-26 19:19:46.143462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.068 [2024-11-26 19:19:46.143467] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.068 [2024-11-26 19:19:46.155397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.068 [2024-11-26 19:19:46.155964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.068 [2024-11-26 19:19:46.155994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.068 [2024-11-26 19:19:46.156003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.068 [2024-11-26 19:19:46.156177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.068 [2024-11-26 19:19:46.156331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.068 [2024-11-26 19:19:46.156338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.068 [2024-11-26 19:19:46.156343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.068 [2024-11-26 19:19:46.156349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.068 [2024-11-26 19:19:46.168140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.068 [2024-11-26 19:19:46.168601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.068 [2024-11-26 19:19:46.168617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.068 [2024-11-26 19:19:46.168622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.068 [2024-11-26 19:19:46.168774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.068 [2024-11-26 19:19:46.168925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.068 [2024-11-26 19:19:46.168931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.068 [2024-11-26 19:19:46.168935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.068 [2024-11-26 19:19:46.168940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.068 [2024-11-26 19:19:46.180904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.068 [2024-11-26 19:19:46.181226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.068 [2024-11-26 19:19:46.181241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.068 [2024-11-26 19:19:46.181246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.068 [2024-11-26 19:19:46.181398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.068 [2024-11-26 19:19:46.181549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.068 [2024-11-26 19:19:46.181554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.068 [2024-11-26 19:19:46.181559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.068 [2024-11-26 19:19:46.181564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.068 [2024-11-26 19:19:46.193652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.068 [2024-11-26 19:19:46.194034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.068 [2024-11-26 19:19:46.194047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.068 [2024-11-26 19:19:46.194052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.068 [2024-11-26 19:19:46.194207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.068 [2024-11-26 19:19:46.194359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.068 [2024-11-26 19:19:46.194364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.069 [2024-11-26 19:19:46.194369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.069 [2024-11-26 19:19:46.194374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.069 [2024-11-26 19:19:46.206316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.069 [2024-11-26 19:19:46.206811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.069 [2024-11-26 19:19:46.206823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.069 [2024-11-26 19:19:46.206828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.069 [2024-11-26 19:19:46.206979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.069 [2024-11-26 19:19:46.207130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.069 [2024-11-26 19:19:46.207136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.069 [2024-11-26 19:19:46.207140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.069 [2024-11-26 19:19:46.207145] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.069 [2024-11-26 19:19:46.218949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.069 [2024-11-26 19:19:46.219409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.069 [2024-11-26 19:19:46.219422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.069 [2024-11-26 19:19:46.219430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.069 [2024-11-26 19:19:46.219581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.069 [2024-11-26 19:19:46.219732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.069 [2024-11-26 19:19:46.219738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.069 [2024-11-26 19:19:46.219742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.069 [2024-11-26 19:19:46.219747] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.069 [2024-11-26 19:19:46.231681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.069 [2024-11-26 19:19:46.232168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.069 [2024-11-26 19:19:46.232181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.069 [2024-11-26 19:19:46.232187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.069 [2024-11-26 19:19:46.232338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.069 [2024-11-26 19:19:46.232488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.069 [2024-11-26 19:19:46.232494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.069 [2024-11-26 19:19:46.232499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.069 [2024-11-26 19:19:46.232503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.069 [2024-11-26 19:19:46.244304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.069 [2024-11-26 19:19:46.244754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.069 [2024-11-26 19:19:46.244767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.069 [2024-11-26 19:19:46.244772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.069 [2024-11-26 19:19:46.244923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.069 [2024-11-26 19:19:46.245074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.069 [2024-11-26 19:19:46.245080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.069 [2024-11-26 19:19:46.245085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.069 [2024-11-26 19:19:46.245089] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.069 [2024-11-26 19:19:46.257023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.069 [2024-11-26 19:19:46.257495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.069 [2024-11-26 19:19:46.257507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.069 [2024-11-26 19:19:46.257512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.069 [2024-11-26 19:19:46.257663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.069 [2024-11-26 19:19:46.257817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.069 [2024-11-26 19:19:46.257823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.069 [2024-11-26 19:19:46.257828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.069 [2024-11-26 19:19:46.257832] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.069 [2024-11-26 19:19:46.269781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.069 [2024-11-26 19:19:46.270395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.069 [2024-11-26 19:19:46.270426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.069 [2024-11-26 19:19:46.270435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.069 [2024-11-26 19:19:46.270602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.069 [2024-11-26 19:19:46.270756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.069 [2024-11-26 19:19:46.270762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.069 [2024-11-26 19:19:46.270768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.069 [2024-11-26 19:19:46.270774] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.331 [2024-11-26 19:19:46.282439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.331 [2024-11-26 19:19:46.282993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.331 [2024-11-26 19:19:46.283023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.331 [2024-11-26 19:19:46.283032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.331 [2024-11-26 19:19:46.283207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.331 [2024-11-26 19:19:46.283361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.331 [2024-11-26 19:19:46.283367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.331 [2024-11-26 19:19:46.283373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.331 [2024-11-26 19:19:46.283379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.331 [2024-11-26 19:19:46.295194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.331 [2024-11-26 19:19:46.295743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.331 [2024-11-26 19:19:46.295773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.331 [2024-11-26 19:19:46.295782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.331 [2024-11-26 19:19:46.295952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.331 [2024-11-26 19:19:46.296106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.331 [2024-11-26 19:19:46.296112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.331 [2024-11-26 19:19:46.296122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.331 [2024-11-26 19:19:46.296128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.331 [2024-11-26 19:19:46.307918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.331 [2024-11-26 19:19:46.308571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.331 [2024-11-26 19:19:46.308602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.331 [2024-11-26 19:19:46.308610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.331 [2024-11-26 19:19:46.308777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.332 [2024-11-26 19:19:46.308931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.332 [2024-11-26 19:19:46.308937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.332 [2024-11-26 19:19:46.308943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.332 [2024-11-26 19:19:46.308948] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.332 [2024-11-26 19:19:46.320611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.332 [2024-11-26 19:19:46.321178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.332 [2024-11-26 19:19:46.321208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.332 [2024-11-26 19:19:46.321217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.332 [2024-11-26 19:19:46.321386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.332 [2024-11-26 19:19:46.321540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.332 [2024-11-26 19:19:46.321547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.332 [2024-11-26 19:19:46.321552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.332 [2024-11-26 19:19:46.321558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.332 [2024-11-26 19:19:46.333366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.332 [2024-11-26 19:19:46.333820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.332 [2024-11-26 19:19:46.333835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.332 [2024-11-26 19:19:46.333840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.332 [2024-11-26 19:19:46.333991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.332 [2024-11-26 19:19:46.334143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.332 [2024-11-26 19:19:46.334149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.332 [2024-11-26 19:19:46.334154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.332 [2024-11-26 19:19:46.334164] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.332 [2024-11-26 19:19:46.346114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.332 [2024-11-26 19:19:46.346443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.332 [2024-11-26 19:19:46.346457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.332 [2024-11-26 19:19:46.346463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.332 [2024-11-26 19:19:46.346614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.332 [2024-11-26 19:19:46.346766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.332 [2024-11-26 19:19:46.346771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.332 [2024-11-26 19:19:46.346776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.332 [2024-11-26 19:19:46.346781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.332 [2024-11-26 19:19:46.358857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.332 [2024-11-26 19:19:46.359299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.332 [2024-11-26 19:19:46.359313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.332 [2024-11-26 19:19:46.359318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.332 [2024-11-26 19:19:46.359468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.332 [2024-11-26 19:19:46.359619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.332 [2024-11-26 19:19:46.359625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.332 [2024-11-26 19:19:46.359630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.332 [2024-11-26 19:19:46.359636] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.332 [2024-11-26 19:19:46.371584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.332 [2024-11-26 19:19:46.372036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.332 [2024-11-26 19:19:46.372048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.332 [2024-11-26 19:19:46.372053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.332 [2024-11-26 19:19:46.372209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.332 [2024-11-26 19:19:46.372360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.332 [2024-11-26 19:19:46.372366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.332 [2024-11-26 19:19:46.372371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.332 [2024-11-26 19:19:46.372375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.332 [2024-11-26 19:19:46.384333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.332 [2024-11-26 19:19:46.384815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.332 [2024-11-26 19:19:46.384828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.332 [2024-11-26 19:19:46.384838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.332 [2024-11-26 19:19:46.384990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.332 [2024-11-26 19:19:46.385141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.332 [2024-11-26 19:19:46.385147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.332 [2024-11-26 19:19:46.385153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.332 [2024-11-26 19:19:46.385167] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.332 [2024-11-26 19:19:46.396951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.332 [2024-11-26 19:19:46.397407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.333 [2024-11-26 19:19:46.397420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.333 [2024-11-26 19:19:46.397425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.333 [2024-11-26 19:19:46.397576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.333 [2024-11-26 19:19:46.397727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.333 [2024-11-26 19:19:46.397732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.333 [2024-11-26 19:19:46.397737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.333 [2024-11-26 19:19:46.397742] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.333 [2024-11-26 19:19:46.409681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.333 [2024-11-26 19:19:46.410169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.333 [2024-11-26 19:19:46.410182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.333 [2024-11-26 19:19:46.410187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.333 [2024-11-26 19:19:46.410338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.333 [2024-11-26 19:19:46.410489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.333 [2024-11-26 19:19:46.410495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.333 [2024-11-26 19:19:46.410499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.333 [2024-11-26 19:19:46.410504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.333 [2024-11-26 19:19:46.422312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.333 [2024-11-26 19:19:46.422800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.333 [2024-11-26 19:19:46.422812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.333 [2024-11-26 19:19:46.422817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.333 [2024-11-26 19:19:46.422967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.333 [2024-11-26 19:19:46.423122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.333 [2024-11-26 19:19:46.423127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.333 [2024-11-26 19:19:46.423132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.333 [2024-11-26 19:19:46.423137] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.333 [2024-11-26 19:19:46.434931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.333 [2024-11-26 19:19:46.435388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.333 [2024-11-26 19:19:46.435401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.333 [2024-11-26 19:19:46.435406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.333 [2024-11-26 19:19:46.435556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.333 [2024-11-26 19:19:46.435708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.333 [2024-11-26 19:19:46.435713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.333 [2024-11-26 19:19:46.435718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.333 [2024-11-26 19:19:46.435723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.333 [2024-11-26 19:19:46.447686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.333 [2024-11-26 19:19:46.448263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.333 [2024-11-26 19:19:46.448294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.333 [2024-11-26 19:19:46.448302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.333 [2024-11-26 19:19:46.448468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.333 [2024-11-26 19:19:46.448622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.333 [2024-11-26 19:19:46.448628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.333 [2024-11-26 19:19:46.448633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.333 [2024-11-26 19:19:46.448639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.333 [2024-11-26 19:19:46.460437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.333 [2024-11-26 19:19:46.460931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.333 [2024-11-26 19:19:46.460945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.333 [2024-11-26 19:19:46.460951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.333 [2024-11-26 19:19:46.461102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.333 [2024-11-26 19:19:46.461257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.333 [2024-11-26 19:19:46.461263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.333 [2024-11-26 19:19:46.461273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.333 [2024-11-26 19:19:46.461278] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.333 [2024-11-26 19:19:46.473076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.333 [2024-11-26 19:19:46.473551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.333 [2024-11-26 19:19:46.473581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.333 [2024-11-26 19:19:46.473590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.333 [2024-11-26 19:19:46.473756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.333 [2024-11-26 19:19:46.473910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.333 [2024-11-26 19:19:46.473916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.333 [2024-11-26 19:19:46.473921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.333 [2024-11-26 19:19:46.473927] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.333 [2024-11-26 19:19:46.485738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.333 [2024-11-26 19:19:46.486208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.333 [2024-11-26 19:19:46.486224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.333 [2024-11-26 19:19:46.486229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.333 [2024-11-26 19:19:46.486381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.333 [2024-11-26 19:19:46.486532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.333 [2024-11-26 19:19:46.486538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.334 [2024-11-26 19:19:46.486543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.334 [2024-11-26 19:19:46.486548] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.334 [2024-11-26 19:19:46.498485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.334 [2024-11-26 19:19:46.499052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.334 [2024-11-26 19:19:46.499081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.334 [2024-11-26 19:19:46.499090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.334 [2024-11-26 19:19:46.499264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.334 [2024-11-26 19:19:46.499419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.334 [2024-11-26 19:19:46.499425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.334 [2024-11-26 19:19:46.499431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.334 [2024-11-26 19:19:46.499436] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.334 [2024-11-26 19:19:46.511216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.334 [2024-11-26 19:19:46.511773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.334 [2024-11-26 19:19:46.511803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.334 [2024-11-26 19:19:46.511811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.334 [2024-11-26 19:19:46.511978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.334 [2024-11-26 19:19:46.512131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.334 [2024-11-26 19:19:46.512137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.334 [2024-11-26 19:19:46.512143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.334 [2024-11-26 19:19:46.512149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.334 [2024-11-26 19:19:46.523961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.334 [2024-11-26 19:19:46.524494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.334 [2024-11-26 19:19:46.524524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.334 [2024-11-26 19:19:46.524533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.334 [2024-11-26 19:19:46.524699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.334 [2024-11-26 19:19:46.524853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.334 [2024-11-26 19:19:46.524859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.334 [2024-11-26 19:19:46.524864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.334 [2024-11-26 19:19:46.524870] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.334 [2024-11-26 19:19:46.536666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.334 [2024-11-26 19:19:46.537215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.334 [2024-11-26 19:19:46.537252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.334 [2024-11-26 19:19:46.537260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.334 [2024-11-26 19:19:46.537428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.334 [2024-11-26 19:19:46.537582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.334 [2024-11-26 19:19:46.537588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.334 [2024-11-26 19:19:46.537593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.334 [2024-11-26 19:19:46.537599] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.596 [2024-11-26 19:19:46.549420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.596 [2024-11-26 19:19:46.549998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.596 [2024-11-26 19:19:46.550028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.596 [2024-11-26 19:19:46.550040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.596 [2024-11-26 19:19:46.550213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.596 [2024-11-26 19:19:46.550368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.596 [2024-11-26 19:19:46.550374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.596 [2024-11-26 19:19:46.550380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.596 [2024-11-26 19:19:46.550385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.596 [2024-11-26 19:19:46.562168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.596 [2024-11-26 19:19:46.562748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.596 [2024-11-26 19:19:46.562778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.596 [2024-11-26 19:19:46.562786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.596 [2024-11-26 19:19:46.562952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.596 [2024-11-26 19:19:46.563106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.596 [2024-11-26 19:19:46.563112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.596 [2024-11-26 19:19:46.563117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.596 [2024-11-26 19:19:46.563123] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.596 [2024-11-26 19:19:46.574911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.596 [2024-11-26 19:19:46.575496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.596 [2024-11-26 19:19:46.575526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.596 [2024-11-26 19:19:46.575534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.596 [2024-11-26 19:19:46.575701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.596 [2024-11-26 19:19:46.575854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.596 [2024-11-26 19:19:46.575861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.596 [2024-11-26 19:19:46.575866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.596 [2024-11-26 19:19:46.575872] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.596 [2024-11-26 19:19:46.587666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.596 [2024-11-26 19:19:46.588112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.596 [2024-11-26 19:19:46.588142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.596 [2024-11-26 19:19:46.588151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.596 [2024-11-26 19:19:46.588327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.596 [2024-11-26 19:19:46.588485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.596 [2024-11-26 19:19:46.588492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.596 [2024-11-26 19:19:46.588497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.596 [2024-11-26 19:19:46.588502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.596 [2024-11-26 19:19:46.600305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.596 [2024-11-26 19:19:46.600902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.596 [2024-11-26 19:19:46.600933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.596 [2024-11-26 19:19:46.600942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.596 [2024-11-26 19:19:46.601109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.596 [2024-11-26 19:19:46.601271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.596 [2024-11-26 19:19:46.601278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.596 [2024-11-26 19:19:46.601283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.596 [2024-11-26 19:19:46.601289] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.596 [2024-11-26 19:19:46.612928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.596 [2024-11-26 19:19:46.613507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.597 [2024-11-26 19:19:46.613538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.597 [2024-11-26 19:19:46.613546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.597 [2024-11-26 19:19:46.613713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.597 [2024-11-26 19:19:46.613867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.597 [2024-11-26 19:19:46.613873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.597 [2024-11-26 19:19:46.613878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.597 [2024-11-26 19:19:46.613884] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.597 [2024-11-26 19:19:46.625660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.597 [2024-11-26 19:19:46.626190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.597 [2024-11-26 19:19:46.626221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.597 [2024-11-26 19:19:46.626229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.597 [2024-11-26 19:19:46.626399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.597 [2024-11-26 19:19:46.626553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.597 [2024-11-26 19:19:46.626559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.597 [2024-11-26 19:19:46.626569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.597 [2024-11-26 19:19:46.626576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.597 [2024-11-26 19:19:46.638378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.597 [2024-11-26 19:19:46.638846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.597 [2024-11-26 19:19:46.638861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.597 [2024-11-26 19:19:46.638868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.597 [2024-11-26 19:19:46.639020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.597 [2024-11-26 19:19:46.639178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.597 [2024-11-26 19:19:46.639185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.597 [2024-11-26 19:19:46.639191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.597 [2024-11-26 19:19:46.639196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.597 [2024-11-26 19:19:46.651029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.597 [2024-11-26 19:19:46.651515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.597 [2024-11-26 19:19:46.651529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.597 [2024-11-26 19:19:46.651535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.597 [2024-11-26 19:19:46.651686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.597 [2024-11-26 19:19:46.651837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.597 [2024-11-26 19:19:46.651842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.597 [2024-11-26 19:19:46.651847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.597 [2024-11-26 19:19:46.651852] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.597 [2024-11-26 19:19:46.663702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.597 [2024-11-26 19:19:46.664354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.597 [2024-11-26 19:19:46.664384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.597 [2024-11-26 19:19:46.664393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.597 [2024-11-26 19:19:46.664559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.597 [2024-11-26 19:19:46.664714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.597 [2024-11-26 19:19:46.664720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.597 [2024-11-26 19:19:46.664725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.597 [2024-11-26 19:19:46.664731] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.597 [2024-11-26 19:19:46.676393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.597 [2024-11-26 19:19:46.676890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.597 [2024-11-26 19:19:46.676920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.597 [2024-11-26 19:19:46.676929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.597 [2024-11-26 19:19:46.677096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.597 [2024-11-26 19:19:46.677257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.597 [2024-11-26 19:19:46.677264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.597 [2024-11-26 19:19:46.677270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.597 [2024-11-26 19:19:46.677275] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.597 [2024-11-26 19:19:46.689070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.597 [2024-11-26 19:19:46.689654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.597 [2024-11-26 19:19:46.689684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.597 [2024-11-26 19:19:46.689693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.597 [2024-11-26 19:19:46.689859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.597 [2024-11-26 19:19:46.690013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.597 [2024-11-26 19:19:46.690019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.597 [2024-11-26 19:19:46.690024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.597 [2024-11-26 19:19:46.690030] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.597 [2024-11-26 19:19:46.701813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.597 [2024-11-26 19:19:46.702379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.597 [2024-11-26 19:19:46.702409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.597 [2024-11-26 19:19:46.702418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.597 [2024-11-26 19:19:46.702584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.597 [2024-11-26 19:19:46.702738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.597 [2024-11-26 19:19:46.702744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.598 [2024-11-26 19:19:46.702749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.598 [2024-11-26 19:19:46.702755] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.598 [2024-11-26 19:19:46.714545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.598 [2024-11-26 19:19:46.715130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.598 [2024-11-26 19:19:46.715166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.598 [2024-11-26 19:19:46.715178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.598 [2024-11-26 19:19:46.715345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.598 [2024-11-26 19:19:46.715498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.598 [2024-11-26 19:19:46.715504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.598 [2024-11-26 19:19:46.715510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.598 [2024-11-26 19:19:46.715515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.598 [2024-11-26 19:19:46.727168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.598 [2024-11-26 19:19:46.727747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.598 [2024-11-26 19:19:46.727778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.598 [2024-11-26 19:19:46.727786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.598 [2024-11-26 19:19:46.727953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.598 [2024-11-26 19:19:46.728107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.598 [2024-11-26 19:19:46.728113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.598 [2024-11-26 19:19:46.728119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.598 [2024-11-26 19:19:46.728124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.598 [2024-11-26 19:19:46.739905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.598 [2024-11-26 19:19:46.740484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.598 [2024-11-26 19:19:46.740514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.598 [2024-11-26 19:19:46.740523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.598 [2024-11-26 19:19:46.740689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.598 [2024-11-26 19:19:46.740843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.598 [2024-11-26 19:19:46.740849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.598 [2024-11-26 19:19:46.740854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.598 [2024-11-26 19:19:46.740860] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.598 [2024-11-26 19:19:46.752645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.598 [2024-11-26 19:19:46.753244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.598 [2024-11-26 19:19:46.753274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.598 [2024-11-26 19:19:46.753282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.598 [2024-11-26 19:19:46.753448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.598 [2024-11-26 19:19:46.753606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.598 [2024-11-26 19:19:46.753612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.598 [2024-11-26 19:19:46.753618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.598 [2024-11-26 19:19:46.753623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.598 [2024-11-26 19:19:46.765278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.598 [2024-11-26 19:19:46.765830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.598 [2024-11-26 19:19:46.765860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.598 [2024-11-26 19:19:46.765868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.598 [2024-11-26 19:19:46.766035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.598 [2024-11-26 19:19:46.766196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.598 [2024-11-26 19:19:46.766203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.598 [2024-11-26 19:19:46.766209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.598 [2024-11-26 19:19:46.766215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.598 [2024-11-26 19:19:46.778054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.598 [2024-11-26 19:19:46.778543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.598 [2024-11-26 19:19:46.778558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.598 [2024-11-26 19:19:46.778563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.598 [2024-11-26 19:19:46.778715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.598 [2024-11-26 19:19:46.778873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.598 [2024-11-26 19:19:46.778879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.598 [2024-11-26 19:19:46.778884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.598 [2024-11-26 19:19:46.778889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.598 [2024-11-26 19:19:46.790690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.598 [2024-11-26 19:19:46.791200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.598 [2024-11-26 19:19:46.791214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:29.598 [2024-11-26 19:19:46.791219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:29.598 [2024-11-26 19:19:46.791370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:29.598 [2024-11-26 19:19:46.791522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.598 [2024-11-26 19:19:46.791527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.598 [2024-11-26 19:19:46.791539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.598 [2024-11-26 19:19:46.791544] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.598 [2024-11-26 19:19:46.803348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.598 [2024-11-26 19:19:46.803729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.598 [2024-11-26 19:19:46.803742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.598 [2024-11-26 19:19:46.803747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.598 [2024-11-26 19:19:46.803897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.598 [2024-11-26 19:19:46.804049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.599 [2024-11-26 19:19:46.804055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.599 [2024-11-26 19:19:46.804060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.599 [2024-11-26 19:19:46.804064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.861 [2024-11-26 19:19:46.816003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.861 [2024-11-26 19:19:46.816546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.861 [2024-11-26 19:19:46.816576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.861 [2024-11-26 19:19:46.816584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.861 [2024-11-26 19:19:46.816751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.861 [2024-11-26 19:19:46.816905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.861 [2024-11-26 19:19:46.816911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.861 [2024-11-26 19:19:46.816917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.861 [2024-11-26 19:19:46.816923] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.861 [2024-11-26 19:19:46.828708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.861 [2024-11-26 19:19:46.829267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.861 [2024-11-26 19:19:46.829297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.861 [2024-11-26 19:19:46.829306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.861 [2024-11-26 19:19:46.829475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.861 [2024-11-26 19:19:46.829629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.861 [2024-11-26 19:19:46.829634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.861 [2024-11-26 19:19:46.829640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.861 [2024-11-26 19:19:46.829646] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.861 [2024-11-26 19:19:46.841447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.861 [2024-11-26 19:19:46.842002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.861 [2024-11-26 19:19:46.842032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.861 [2024-11-26 19:19:46.842041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.861 [2024-11-26 19:19:46.842214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.861 [2024-11-26 19:19:46.842369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.861 [2024-11-26 19:19:46.842375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.861 [2024-11-26 19:19:46.842381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.861 [2024-11-26 19:19:46.842386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.861 [2024-11-26 19:19:46.854164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.861 [2024-11-26 19:19:46.854747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.861 [2024-11-26 19:19:46.854778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.861 [2024-11-26 19:19:46.854786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.861 [2024-11-26 19:19:46.854953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.861 [2024-11-26 19:19:46.855106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.861 [2024-11-26 19:19:46.855112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.861 [2024-11-26 19:19:46.855118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.861 [2024-11-26 19:19:46.855123] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.861 [2024-11-26 19:19:46.866914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.861 [2024-11-26 19:19:46.867497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.861 [2024-11-26 19:19:46.867527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.861 [2024-11-26 19:19:46.867536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.861 [2024-11-26 19:19:46.867702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.861 [2024-11-26 19:19:46.867856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.861 [2024-11-26 19:19:46.867862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.861 [2024-11-26 19:19:46.867867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.862 [2024-11-26 19:19:46.867873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.862 [2024-11-26 19:19:46.879686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.862 [2024-11-26 19:19:46.880362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.862 [2024-11-26 19:19:46.880392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.862 [2024-11-26 19:19:46.880404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.862 [2024-11-26 19:19:46.880571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.862 [2024-11-26 19:19:46.880725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.862 [2024-11-26 19:19:46.880732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.862 [2024-11-26 19:19:46.880738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.862 [2024-11-26 19:19:46.880744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.862 [2024-11-26 19:19:46.892396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.862 [2024-11-26 19:19:46.892924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.862 [2024-11-26 19:19:46.892939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.862 [2024-11-26 19:19:46.892945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.862 [2024-11-26 19:19:46.893097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.862 [2024-11-26 19:19:46.893253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.862 [2024-11-26 19:19:46.893260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.862 [2024-11-26 19:19:46.893264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.862 [2024-11-26 19:19:46.893270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.862 [2024-11-26 19:19:46.905052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.862 [2024-11-26 19:19:46.905519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.862 [2024-11-26 19:19:46.905532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.862 [2024-11-26 19:19:46.905537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.862 [2024-11-26 19:19:46.905688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.862 [2024-11-26 19:19:46.905839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.862 [2024-11-26 19:19:46.905845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.862 [2024-11-26 19:19:46.905850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.862 [2024-11-26 19:19:46.905855] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.862 [2024-11-26 19:19:46.917791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.862 [2024-11-26 19:19:46.918360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.862 [2024-11-26 19:19:46.918390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.862 [2024-11-26 19:19:46.918399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.862 [2024-11-26 19:19:46.918565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.862 [2024-11-26 19:19:46.918723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.862 [2024-11-26 19:19:46.918729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.862 [2024-11-26 19:19:46.918734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.862 [2024-11-26 19:19:46.918740] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.862 [2024-11-26 19:19:46.930533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.862 [2024-11-26 19:19:46.931106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.862 [2024-11-26 19:19:46.931135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.862 [2024-11-26 19:19:46.931144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.862 [2024-11-26 19:19:46.931320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.862 [2024-11-26 19:19:46.931475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.862 [2024-11-26 19:19:46.931481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.862 [2024-11-26 19:19:46.931486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.862 [2024-11-26 19:19:46.931492] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.862 [2024-11-26 19:19:46.943283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.862 [2024-11-26 19:19:46.943860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.862 [2024-11-26 19:19:46.943890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.862 [2024-11-26 19:19:46.943899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.862 [2024-11-26 19:19:46.944066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.862 [2024-11-26 19:19:46.944228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.862 [2024-11-26 19:19:46.944234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.862 [2024-11-26 19:19:46.944240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.862 [2024-11-26 19:19:46.944246] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.862 [2024-11-26 19:19:46.956034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.862 [2024-11-26 19:19:46.956499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.862 [2024-11-26 19:19:46.956515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.862 [2024-11-26 19:19:46.956520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.862 [2024-11-26 19:19:46.956672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.862 [2024-11-26 19:19:46.956823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.862 [2024-11-26 19:19:46.956828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.862 [2024-11-26 19:19:46.956838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.862 [2024-11-26 19:19:46.956843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.862 [2024-11-26 19:19:46.968768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.862 [2024-11-26 19:19:46.969318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.862 [2024-11-26 19:19:46.969348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.862 [2024-11-26 19:19:46.969357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.862 [2024-11-26 19:19:46.969523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.863 [2024-11-26 19:19:46.969677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.863 [2024-11-26 19:19:46.969683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.863 [2024-11-26 19:19:46.969688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.863 [2024-11-26 19:19:46.969694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.863 [2024-11-26 19:19:46.981519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.863 [2024-11-26 19:19:46.982096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.863 [2024-11-26 19:19:46.982126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.863 [2024-11-26 19:19:46.982134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.863 [2024-11-26 19:19:46.982309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.863 [2024-11-26 19:19:46.982463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.863 [2024-11-26 19:19:46.982470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.863 [2024-11-26 19:19:46.982475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.863 [2024-11-26 19:19:46.982481] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.863 [2024-11-26 19:19:46.994143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.863 [2024-11-26 19:19:46.994629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.863 [2024-11-26 19:19:46.994644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.863 [2024-11-26 19:19:46.994650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.863 [2024-11-26 19:19:46.994802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.863 [2024-11-26 19:19:46.994953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.863 [2024-11-26 19:19:46.994960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.863 [2024-11-26 19:19:46.994965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.863 [2024-11-26 19:19:46.994970] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.863 [2024-11-26 19:19:47.006765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.863 [2024-11-26 19:19:47.007256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.863 [2024-11-26 19:19:47.007270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.863 [2024-11-26 19:19:47.007276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.863 [2024-11-26 19:19:47.007427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.863 [2024-11-26 19:19:47.007578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.863 [2024-11-26 19:19:47.007583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.863 [2024-11-26 19:19:47.007588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.863 [2024-11-26 19:19:47.007593] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.863 [2024-11-26 19:19:47.019386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.863 [2024-11-26 19:19:47.019871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.863 [2024-11-26 19:19:47.019883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.863 [2024-11-26 19:19:47.019888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.863 [2024-11-26 19:19:47.020039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.863 [2024-11-26 19:19:47.020195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.863 [2024-11-26 19:19:47.020201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.863 [2024-11-26 19:19:47.020206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.863 [2024-11-26 19:19:47.020210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.863 [2024-11-26 19:19:47.032007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.863 [2024-11-26 19:19:47.032577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.863 [2024-11-26 19:19:47.032608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.863 [2024-11-26 19:19:47.032616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.863 [2024-11-26 19:19:47.032783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.863 [2024-11-26 19:19:47.032937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.863 [2024-11-26 19:19:47.032943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.863 [2024-11-26 19:19:47.032948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.863 [2024-11-26 19:19:47.032954] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.863 [2024-11-26 19:19:47.044629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.863 [2024-11-26 19:19:47.045203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.863 [2024-11-26 19:19:47.045233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.863 [2024-11-26 19:19:47.045245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.863 [2024-11-26 19:19:47.045414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.863 [2024-11-26 19:19:47.045568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.863 [2024-11-26 19:19:47.045574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.863 [2024-11-26 19:19:47.045579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.863 [2024-11-26 19:19:47.045585] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.863 [2024-11-26 19:19:47.057384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.863 [2024-11-26 19:19:47.057963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.863 [2024-11-26 19:19:47.057993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:29.863 [2024-11-26 19:19:47.058002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:29.863 [2024-11-26 19:19:47.058176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:29.863 [2024-11-26 19:19:47.058330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.863 [2024-11-26 19:19:47.058336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.863 [2024-11-26 19:19:47.058341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.863 [2024-11-26 19:19:47.058347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.126 [2024-11-26 19:19:47.070129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.126 [2024-11-26 19:19:47.070706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.126 [2024-11-26 19:19:47.070736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.126 [2024-11-26 19:19:47.070745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.126 [2024-11-26 19:19:47.070912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.126 [2024-11-26 19:19:47.071066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.126 [2024-11-26 19:19:47.071072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.126 [2024-11-26 19:19:47.071077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.126 [2024-11-26 19:19:47.071083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.126 [2024-11-26 19:19:47.082753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.126 [2024-11-26 19:19:47.083267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.126 [2024-11-26 19:19:47.083297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.126 [2024-11-26 19:19:47.083306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.126 [2024-11-26 19:19:47.083475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.126 [2024-11-26 19:19:47.083633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.126 [2024-11-26 19:19:47.083639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.126 [2024-11-26 19:19:47.083645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.126 [2024-11-26 19:19:47.083651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.126 5391.40 IOPS, 21.06 MiB/s [2024-11-26T18:19:47.339Z] [2024-11-26 19:19:47.095426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.126 [2024-11-26 19:19:47.096063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.126 [2024-11-26 19:19:47.096094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.126 [2024-11-26 19:19:47.096103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.126 [2024-11-26 19:19:47.096277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.126 [2024-11-26 19:19:47.096431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.126 [2024-11-26 19:19:47.096438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.126 [2024-11-26 19:19:47.096443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.126 [2024-11-26 19:19:47.096449] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
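[Editor's note] The "5391.40 IOPS, 21.06 MiB/s" record interleaved above is the test tool's periodic progress output, printed while the reconnect cycle keeps failing. The two figures are mutually consistent with a 4 KiB I/O size; a quick cross-check sketch (values copied from the log, nothing else assumed):

    /* Derive the per-I/O size implied by the progress record above. */
    #include <stdio.h>

    int main(void)
    {
        double iops = 5391.40;      /* from the log */
        double mib_per_s = 21.06;   /* from the log */
        double bytes_per_io = mib_per_s * 1024 * 1024 / iops;
        printf("~%.0f bytes per I/O\n", bytes_per_io);  /* prints ~4096, i.e. 4 KiB I/Os */
        return 0;
    }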
00:29:30.126 [2024-11-26 19:19:47.108078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.126 [2024-11-26 19:19:47.108446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.126 [2024-11-26 19:19:47.108461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.126 [2024-11-26 19:19:47.108467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.126 [2024-11-26 19:19:47.108618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.126 [2024-11-26 19:19:47.108769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.126 [2024-11-26 19:19:47.108775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.126 [2024-11-26 19:19:47.108779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.126 [2024-11-26 19:19:47.108784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.126 [2024-11-26 19:19:47.120704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.126 [2024-11-26 19:19:47.121190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.126 [2024-11-26 19:19:47.121203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.126 [2024-11-26 19:19:47.121209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.126 [2024-11-26 19:19:47.121360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.126 [2024-11-26 19:19:47.121511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.126 [2024-11-26 19:19:47.121516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.126 [2024-11-26 19:19:47.121525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.126 [2024-11-26 19:19:47.121530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.126 [2024-11-26 19:19:47.133449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.126 [2024-11-26 19:19:47.134021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.126 [2024-11-26 19:19:47.134051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.126 [2024-11-26 19:19:47.134059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.126 [2024-11-26 19:19:47.134233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.126 [2024-11-26 19:19:47.134388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.126 [2024-11-26 19:19:47.134395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.126 [2024-11-26 19:19:47.134402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.126 [2024-11-26 19:19:47.134408] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.126 [2024-11-26 19:19:47.146070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.126 [2024-11-26 19:19:47.146706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.126 [2024-11-26 19:19:47.146736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.126 [2024-11-26 19:19:47.146745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.126 [2024-11-26 19:19:47.146912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.126 [2024-11-26 19:19:47.147067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.126 [2024-11-26 19:19:47.147073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.126 [2024-11-26 19:19:47.147079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.126 [2024-11-26 19:19:47.147085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.126 [2024-11-26 19:19:47.158734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.126 [2024-11-26 19:19:47.159207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.126 [2024-11-26 19:19:47.159228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.126 [2024-11-26 19:19:47.159235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.126 [2024-11-26 19:19:47.159392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.126 [2024-11-26 19:19:47.159544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.126 [2024-11-26 19:19:47.159550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.126 [2024-11-26 19:19:47.159555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.126 [2024-11-26 19:19:47.159560] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.126 [2024-11-26 19:19:47.171346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.126 [2024-11-26 19:19:47.171930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.127 [2024-11-26 19:19:47.171959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.127 [2024-11-26 19:19:47.171968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.127 [2024-11-26 19:19:47.172134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.127 [2024-11-26 19:19:47.172295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.127 [2024-11-26 19:19:47.172302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.127 [2024-11-26 19:19:47.172308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.127 [2024-11-26 19:19:47.172314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.127 [2024-11-26 19:19:47.183966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.127 [2024-11-26 19:19:47.184533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.127 [2024-11-26 19:19:47.184564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.127 [2024-11-26 19:19:47.184573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.127 [2024-11-26 19:19:47.184739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.127 [2024-11-26 19:19:47.184893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.127 [2024-11-26 19:19:47.184899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.127 [2024-11-26 19:19:47.184904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.127 [2024-11-26 19:19:47.184910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.127 [2024-11-26 19:19:47.196710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.127 [2024-11-26 19:19:47.197259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.127 [2024-11-26 19:19:47.197289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.127 [2024-11-26 19:19:47.197298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.127 [2024-11-26 19:19:47.197467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.127 [2024-11-26 19:19:47.197621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.127 [2024-11-26 19:19:47.197627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.127 [2024-11-26 19:19:47.197632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.127 [2024-11-26 19:19:47.197638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.127 [2024-11-26 19:19:47.209423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.127 [2024-11-26 19:19:47.209965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.127 [2024-11-26 19:19:47.209995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.127 [2024-11-26 19:19:47.210007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.127 [2024-11-26 19:19:47.210181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.127 [2024-11-26 19:19:47.210336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.127 [2024-11-26 19:19:47.210342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.127 [2024-11-26 19:19:47.210347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.127 [2024-11-26 19:19:47.210353] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.127 [2024-11-26 19:19:47.222143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.127 [2024-11-26 19:19:47.222717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.127 [2024-11-26 19:19:47.222748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.127 [2024-11-26 19:19:47.222756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.127 [2024-11-26 19:19:47.222922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.127 [2024-11-26 19:19:47.223076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.127 [2024-11-26 19:19:47.223082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.127 [2024-11-26 19:19:47.223088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.127 [2024-11-26 19:19:47.223093] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.127 [2024-11-26 19:19:47.234879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.127 [2024-11-26 19:19:47.235464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.127 [2024-11-26 19:19:47.235494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.127 [2024-11-26 19:19:47.235502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.127 [2024-11-26 19:19:47.235669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.127 [2024-11-26 19:19:47.235823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.127 [2024-11-26 19:19:47.235829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.127 [2024-11-26 19:19:47.235834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.127 [2024-11-26 19:19:47.235840] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.127 [2024-11-26 19:19:47.247629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.127 [2024-11-26 19:19:47.248201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.127 [2024-11-26 19:19:47.248232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.127 [2024-11-26 19:19:47.248240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.127 [2024-11-26 19:19:47.248409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.127 [2024-11-26 19:19:47.248567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.127 [2024-11-26 19:19:47.248573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.127 [2024-11-26 19:19:47.248578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.127 [2024-11-26 19:19:47.248584] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.127 [2024-11-26 19:19:47.260379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.127 [2024-11-26 19:19:47.260968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.127 [2024-11-26 19:19:47.260998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.127 [2024-11-26 19:19:47.261006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.127 [2024-11-26 19:19:47.261180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.127 [2024-11-26 19:19:47.261334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.127 [2024-11-26 19:19:47.261340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.127 [2024-11-26 19:19:47.261346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.127 [2024-11-26 19:19:47.261351] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.127 [2024-11-26 19:19:47.273135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.127 [2024-11-26 19:19:47.273714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.127 [2024-11-26 19:19:47.273745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.127 [2024-11-26 19:19:47.273753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.127 [2024-11-26 19:19:47.273920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.127 [2024-11-26 19:19:47.274074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.127 [2024-11-26 19:19:47.274080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.127 [2024-11-26 19:19:47.274085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.127 [2024-11-26 19:19:47.274091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.127 [2024-11-26 19:19:47.285900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.127 [2024-11-26 19:19:47.286346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.127 [2024-11-26 19:19:47.286376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.127 [2024-11-26 19:19:47.286385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.127 [2024-11-26 19:19:47.286551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.127 [2024-11-26 19:19:47.286705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.127 [2024-11-26 19:19:47.286711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.127 [2024-11-26 19:19:47.286721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.127 [2024-11-26 19:19:47.286726] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.127 [2024-11-26 19:19:47.298655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.127 [2024-11-26 19:19:47.299310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.128 [2024-11-26 19:19:47.299341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.128 [2024-11-26 19:19:47.299349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.128 [2024-11-26 19:19:47.299516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.128 [2024-11-26 19:19:47.299669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.128 [2024-11-26 19:19:47.299675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.128 [2024-11-26 19:19:47.299681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.128 [2024-11-26 19:19:47.299686] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.128 [2024-11-26 19:19:47.311339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.128 [2024-11-26 19:19:47.311890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.128 [2024-11-26 19:19:47.311920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.128 [2024-11-26 19:19:47.311928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.128 [2024-11-26 19:19:47.312095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.128 [2024-11-26 19:19:47.312256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.128 [2024-11-26 19:19:47.312264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.128 [2024-11-26 19:19:47.312269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.128 [2024-11-26 19:19:47.312275] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.128 [2024-11-26 19:19:47.324064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.128 [2024-11-26 19:19:47.324627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.128 [2024-11-26 19:19:47.324658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.128 [2024-11-26 19:19:47.324666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.128 [2024-11-26 19:19:47.324833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.128 [2024-11-26 19:19:47.324986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.128 [2024-11-26 19:19:47.324992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.128 [2024-11-26 19:19:47.324998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.128 [2024-11-26 19:19:47.325003] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.399 [2024-11-26 19:19:47.336805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.399 [2024-11-26 19:19:47.337167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.399 [2024-11-26 19:19:47.337183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.399 [2024-11-26 19:19:47.337189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.399 [2024-11-26 19:19:47.337341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.399 [2024-11-26 19:19:47.337492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.399 [2024-11-26 19:19:47.337498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.399 [2024-11-26 19:19:47.337503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.399 [2024-11-26 19:19:47.337508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.399 [2024-11-26 19:19:47.349437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.399 [2024-11-26 19:19:47.349921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.399 [2024-11-26 19:19:47.349934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.399 [2024-11-26 19:19:47.349939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.399 [2024-11-26 19:19:47.350089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.399 [2024-11-26 19:19:47.350245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.399 [2024-11-26 19:19:47.350251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.399 [2024-11-26 19:19:47.350256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.399 [2024-11-26 19:19:47.350261] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.399 [2024-11-26 19:19:47.362067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.399 [2024-11-26 19:19:47.362629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.399 [2024-11-26 19:19:47.362659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.399 [2024-11-26 19:19:47.362668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.400 [2024-11-26 19:19:47.362834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.400 [2024-11-26 19:19:47.362988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.400 [2024-11-26 19:19:47.362994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.400 [2024-11-26 19:19:47.362999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.400 [2024-11-26 19:19:47.363005] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.400 [2024-11-26 19:19:47.374798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.400 [2024-11-26 19:19:47.375445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.400 [2024-11-26 19:19:47.375475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.400 [2024-11-26 19:19:47.375487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.400 [2024-11-26 19:19:47.375654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.400 [2024-11-26 19:19:47.375808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.400 [2024-11-26 19:19:47.375814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.400 [2024-11-26 19:19:47.375819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.400 [2024-11-26 19:19:47.375825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.400 [2024-11-26 19:19:47.387484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.400 [2024-11-26 19:19:47.388076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.400 [2024-11-26 19:19:47.388106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.400 [2024-11-26 19:19:47.388115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.400 [2024-11-26 19:19:47.388289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.400 [2024-11-26 19:19:47.388444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.400 [2024-11-26 19:19:47.388451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.400 [2024-11-26 19:19:47.388456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.401 [2024-11-26 19:19:47.388462] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.401 [2024-11-26 19:19:47.400118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.401 [2024-11-26 19:19:47.400671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.401 [2024-11-26 19:19:47.400702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.401 [2024-11-26 19:19:47.400711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.401 [2024-11-26 19:19:47.400878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.401 [2024-11-26 19:19:47.401032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.401 [2024-11-26 19:19:47.401038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.401 [2024-11-26 19:19:47.401044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.401 [2024-11-26 19:19:47.401051] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.401 [2024-11-26 19:19:47.412850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.401 [2024-11-26 19:19:47.413463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.401 [2024-11-26 19:19:47.413493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.401 [2024-11-26 19:19:47.413502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.401 [2024-11-26 19:19:47.413668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.401 [2024-11-26 19:19:47.413830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.401 [2024-11-26 19:19:47.413836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.401 [2024-11-26 19:19:47.413842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.401 [2024-11-26 19:19:47.413847] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.401 [2024-11-26 19:19:47.425501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.401 [2024-11-26 19:19:47.426071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.401 [2024-11-26 19:19:47.426102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.401 [2024-11-26 19:19:47.426110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.401 [2024-11-26 19:19:47.426289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.401 [2024-11-26 19:19:47.426443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.401 [2024-11-26 19:19:47.426449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.401 [2024-11-26 19:19:47.426455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.401 [2024-11-26 19:19:47.426460] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.401 [2024-11-26 19:19:47.438251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.401 [2024-11-26 19:19:47.438753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.401 [2024-11-26 19:19:47.438783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.401 [2024-11-26 19:19:47.438791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.402 [2024-11-26 19:19:47.438958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.402 [2024-11-26 19:19:47.439112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.402 [2024-11-26 19:19:47.439118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.402 [2024-11-26 19:19:47.439123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.402 [2024-11-26 19:19:47.439129] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.402 [2024-11-26 19:19:47.450940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.402 [2024-11-26 19:19:47.451518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.402 [2024-11-26 19:19:47.451548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.402 [2024-11-26 19:19:47.451557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.402 [2024-11-26 19:19:47.451723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.402 [2024-11-26 19:19:47.451877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.402 [2024-11-26 19:19:47.451883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.402 [2024-11-26 19:19:47.451892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.402 [2024-11-26 19:19:47.451898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.402 [2024-11-26 19:19:47.463698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.402 [2024-11-26 19:19:47.464264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.402 [2024-11-26 19:19:47.464294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.402 [2024-11-26 19:19:47.464302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.402 [2024-11-26 19:19:47.464471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.402 [2024-11-26 19:19:47.464625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.402 [2024-11-26 19:19:47.464631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.402 [2024-11-26 19:19:47.464637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.402 [2024-11-26 19:19:47.464642] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.402 [2024-11-26 19:19:47.476446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.402 [2024-11-26 19:19:47.477020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.402 [2024-11-26 19:19:47.477050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.402 [2024-11-26 19:19:47.477058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.402 [2024-11-26 19:19:47.477230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.402 [2024-11-26 19:19:47.477385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.402 [2024-11-26 19:19:47.477391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.402 [2024-11-26 19:19:47.477396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.403 [2024-11-26 19:19:47.477402] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.403 [2024-11-26 19:19:47.489201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.403 [2024-11-26 19:19:47.489677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.403 [2024-11-26 19:19:47.489691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.403 [2024-11-26 19:19:47.489697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.403 [2024-11-26 19:19:47.489849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.403 [2024-11-26 19:19:47.489999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.403 [2024-11-26 19:19:47.490005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.403 [2024-11-26 19:19:47.490010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.403 [2024-11-26 19:19:47.490015] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.403 [2024-11-26 19:19:47.501951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.403 [2024-11-26 19:19:47.502544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.403 [2024-11-26 19:19:47.502575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.403 [2024-11-26 19:19:47.502585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.403 [2024-11-26 19:19:47.502753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.403 [2024-11-26 19:19:47.502908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.403 [2024-11-26 19:19:47.502915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.403 [2024-11-26 19:19:47.502920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.403 [2024-11-26 19:19:47.502925] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.403 [2024-11-26 19:19:47.514572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.403 [2024-11-26 19:19:47.515142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.403 [2024-11-26 19:19:47.515180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.403 [2024-11-26 19:19:47.515190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.403 [2024-11-26 19:19:47.515359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.403 [2024-11-26 19:19:47.515512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.403 [2024-11-26 19:19:47.515518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.403 [2024-11-26 19:19:47.515524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.403 [2024-11-26 19:19:47.515529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.403 [2024-11-26 19:19:47.527336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.403 [2024-11-26 19:19:47.527880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.403 [2024-11-26 19:19:47.527910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.403 [2024-11-26 19:19:47.527919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.403 [2024-11-26 19:19:47.528085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.403 [2024-11-26 19:19:47.528246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.403 [2024-11-26 19:19:47.528253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.404 [2024-11-26 19:19:47.528258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.404 [2024-11-26 19:19:47.528264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.404 [2024-11-26 19:19:47.540050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.404 [2024-11-26 19:19:47.540652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.404 [2024-11-26 19:19:47.540683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.404 [2024-11-26 19:19:47.540695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.404 [2024-11-26 19:19:47.540861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.404 [2024-11-26 19:19:47.541015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.404 [2024-11-26 19:19:47.541021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.404 [2024-11-26 19:19:47.541026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.404 [2024-11-26 19:19:47.541032] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.404 [2024-11-26 19:19:47.552705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.404 [2024-11-26 19:19:47.553358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.404 [2024-11-26 19:19:47.553389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.404 [2024-11-26 19:19:47.553397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.404 [2024-11-26 19:19:47.553564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.404 [2024-11-26 19:19:47.553718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.404 [2024-11-26 19:19:47.553724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.404 [2024-11-26 19:19:47.553730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.404 [2024-11-26 19:19:47.553735] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.404 [2024-11-26 19:19:47.565388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.404 [2024-11-26 19:19:47.565969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.405 [2024-11-26 19:19:47.565999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.405 [2024-11-26 19:19:47.566008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.405 [2024-11-26 19:19:47.566183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.405 [2024-11-26 19:19:47.566337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.405 [2024-11-26 19:19:47.566344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.405 [2024-11-26 19:19:47.566349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.405 [2024-11-26 19:19:47.566354] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.405 [2024-11-26 19:19:47.578029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.405 [2024-11-26 19:19:47.578629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.405 [2024-11-26 19:19:47.578660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.405 [2024-11-26 19:19:47.578669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.405 [2024-11-26 19:19:47.578837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.405 [2024-11-26 19:19:47.578995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.405 [2024-11-26 19:19:47.579001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.405 [2024-11-26 19:19:47.579006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.405 [2024-11-26 19:19:47.579012] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.405 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3124948 Killed "${NVMF_APP[@]}" "$@"
00:29:30.405 19:19:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:29:30.405 19:19:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:29:30.405 19:19:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:30.405 19:19:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:30.405 19:19:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:30.405 [2024-11-26 19:19:47.590819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.405 [2024-11-26 19:19:47.591304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.405 [2024-11-26 19:19:47.591335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.405 [2024-11-26 19:19:47.591343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.406 [2024-11-26 19:19:47.591512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.406 [2024-11-26 19:19:47.591666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.406 [2024-11-26 19:19:47.591673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.406 [2024-11-26 19:19:47.591679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.406 [2024-11-26 19:19:47.591685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.406 19:19:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3126522
00:29:30.406 19:19:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3126522
00:29:30.406 19:19:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:29:30.406 19:19:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3126522 ']'
00:29:30.406 19:19:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:30.406 19:19:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:30.406 19:19:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:30.406 19:19:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:30.406 19:19:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:30.668 [2024-11-26 19:19:47.603495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.668 [2024-11-26 19:19:47.604054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.668 [2024-11-26 19:19:47.604085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.668 [2024-11-26 19:19:47.604094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.668 [2024-11-26 19:19:47.604271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.668 [2024-11-26 19:19:47.604426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.668 [2024-11-26 19:19:47.604433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.668 [2024-11-26 19:19:47.604438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.668 [2024-11-26 19:19:47.604444] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.668 [2024-11-26 19:19:47.616121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.668 [2024-11-26 19:19:47.616660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.668 [2024-11-26 19:19:47.616691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.668 [2024-11-26 19:19:47.616700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.668 [2024-11-26 19:19:47.616867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.668 [2024-11-26 19:19:47.617022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.668 [2024-11-26 19:19:47.617029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.668 [2024-11-26 19:19:47.617035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.669 [2024-11-26 19:19:47.617040] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.669 [2024-11-26 19:19:47.628847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.669 [2024-11-26 19:19:47.629343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.669 [2024-11-26 19:19:47.629359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.669 [2024-11-26 19:19:47.629365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.669 [2024-11-26 19:19:47.629517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.669 [2024-11-26 19:19:47.629668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.669 [2024-11-26 19:19:47.629674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.669 [2024-11-26 19:19:47.629679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.669 [2024-11-26 19:19:47.629684] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.669 [2024-11-26 19:19:47.641473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.669 [2024-11-26 19:19:47.641930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.669 [2024-11-26 19:19:47.641943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.669 [2024-11-26 19:19:47.641948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.669 [2024-11-26 19:19:47.642099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.669 [2024-11-26 19:19:47.642255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.669 [2024-11-26 19:19:47.642266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.669 [2024-11-26 19:19:47.642272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.669 [2024-11-26 19:19:47.642277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.669 [2024-11-26 19:19:47.647713] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization...
00:29:30.669 [2024-11-26 19:19:47.647766] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:30.669 [2024-11-26 19:19:47.654218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.669 [2024-11-26 19:19:47.654675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.669 [2024-11-26 19:19:47.654688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.669 [2024-11-26 19:19:47.654694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.669 [2024-11-26 19:19:47.654844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.669 [2024-11-26 19:19:47.654996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.669 [2024-11-26 19:19:47.655002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.669 [2024-11-26 19:19:47.655008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.669 [2024-11-26 19:19:47.655015] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.669 [2024-11-26 19:19:47.666953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.669 [2024-11-26 19:19:47.667494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.669 [2024-11-26 19:19:47.667525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.669 [2024-11-26 19:19:47.667533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.669 [2024-11-26 19:19:47.667700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.669 [2024-11-26 19:19:47.667855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.669 [2024-11-26 19:19:47.667861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.669 [2024-11-26 19:19:47.667867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.669 [2024-11-26 19:19:47.667873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.669 [2024-11-26 19:19:47.679619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.669 [2024-11-26 19:19:47.680228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.669 [2024-11-26 19:19:47.680259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.669 [2024-11-26 19:19:47.680267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.669 [2024-11-26 19:19:47.680437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.669 [2024-11-26 19:19:47.680590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.669 [2024-11-26 19:19:47.680601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.669 [2024-11-26 19:19:47.680607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.669 [2024-11-26 19:19:47.680613] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.669 [2024-11-26 19:19:47.692361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.669 [2024-11-26 19:19:47.692815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.669 [2024-11-26 19:19:47.692831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.669 [2024-11-26 19:19:47.692836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.669 [2024-11-26 19:19:47.692988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.669 [2024-11-26 19:19:47.693139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.669 [2024-11-26 19:19:47.693145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.669 [2024-11-26 19:19:47.693150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.669 [2024-11-26 19:19:47.693155] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.669 [2024-11-26 19:19:47.705086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.669 [2024-11-26 19:19:47.705681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.669 [2024-11-26 19:19:47.705711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.669 [2024-11-26 19:19:47.705720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.669 [2024-11-26 19:19:47.705887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.669 [2024-11-26 19:19:47.706040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.669 [2024-11-26 19:19:47.706047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.669 [2024-11-26 19:19:47.706052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.669 [2024-11-26 19:19:47.706059] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.669 [2024-11-26 19:19:47.717707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.669 [2024-11-26 19:19:47.718211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.669 [2024-11-26 19:19:47.718241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.669 [2024-11-26 19:19:47.718249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.669 [2024-11-26 19:19:47.718419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.669 [2024-11-26 19:19:47.718574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.669 [2024-11-26 19:19:47.718588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.669 [2024-11-26 19:19:47.718594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.669 [2024-11-26 19:19:47.718604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.669 [2024-11-26 19:19:47.730408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.669 [2024-11-26 19:19:47.730881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.669 [2024-11-26 19:19:47.730911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.669 [2024-11-26 19:19:47.730920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.669 [2024-11-26 19:19:47.731089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.669 [2024-11-26 19:19:47.731249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.670 [2024-11-26 19:19:47.731257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.670 [2024-11-26 19:19:47.731262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.670 [2024-11-26 19:19:47.731268] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.670 [2024-11-26 19:19:47.739115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:29:30.670 [2024-11-26 19:19:47.743057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.670 [2024-11-26 19:19:47.743637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.670 [2024-11-26 19:19:47.743668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.670 [2024-11-26 19:19:47.743677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.670 [2024-11-26 19:19:47.743846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.670 [2024-11-26 19:19:47.744000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.670 [2024-11-26 19:19:47.744007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.670 [2024-11-26 19:19:47.744013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.670 [2024-11-26 19:19:47.744019] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.670 [2024-11-26 19:19:47.755691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.670 [2024-11-26 19:19:47.756211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.670 [2024-11-26 19:19:47.756227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.670 [2024-11-26 19:19:47.756233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.670 [2024-11-26 19:19:47.756385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.670 [2024-11-26 19:19:47.756536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.670 [2024-11-26 19:19:47.756541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.670 [2024-11-26 19:19:47.756547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.670 [2024-11-26 19:19:47.756552] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.670 [2024-11-26 19:19:47.768363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.670 [2024-11-26 19:19:47.768447] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:30.670 [2024-11-26 19:19:47.768468] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:30.670 [2024-11-26 19:19:47.768475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:30.670 [2024-11-26 19:19:47.768481] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:30.670 [2024-11-26 19:19:47.768485] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:30.670 [2024-11-26 19:19:47.768866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.670 [2024-11-26 19:19:47.768879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.670 [2024-11-26 19:19:47.768885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.670 [2024-11-26 19:19:47.769036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.670 [2024-11-26 19:19:47.769194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.670 [2024-11-26 19:19:47.769201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.670 [2024-11-26 19:19:47.769206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.670 [2024-11-26 19:19:47.769210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.670 [2024-11-26 19:19:47.769666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:29:30.670 [2024-11-26 19:19:47.769816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:30.670 [2024-11-26 19:19:47.769818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:29:30.670 [2024-11-26 19:19:47.781002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.670 [2024-11-26 19:19:47.781509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.670 [2024-11-26 19:19:47.781522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.670 [2024-11-26 19:19:47.781528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.670 [2024-11-26 19:19:47.781679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.670 [2024-11-26 19:19:47.781831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.670 [2024-11-26 19:19:47.781837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.670 [2024-11-26 19:19:47.781842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.670 [2024-11-26 19:19:47.781847] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.670 [2024-11-26 19:19:47.793667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.670 [2024-11-26 19:19:47.794151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.670 [2024-11-26 19:19:47.794170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.670 [2024-11-26 19:19:47.794176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.670 [2024-11-26 19:19:47.794327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.670 [2024-11-26 19:19:47.794484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.670 [2024-11-26 19:19:47.794490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.670 [2024-11-26 19:19:47.794495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.670 [2024-11-26 19:19:47.794500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.670 [2024-11-26 19:19:47.806300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.670 [2024-11-26 19:19:47.806775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.670 [2024-11-26 19:19:47.806788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.670 [2024-11-26 19:19:47.806793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.670 [2024-11-26 19:19:47.806945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.670 [2024-11-26 19:19:47.807096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.670 [2024-11-26 19:19:47.807102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.670 [2024-11-26 19:19:47.807107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.670 [2024-11-26 19:19:47.807112] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.670 [2024-11-26 19:19:47.819050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.670 [2024-11-26 19:19:47.819631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.670 [2024-11-26 19:19:47.819665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420
00:29:30.670 [2024-11-26 19:19:47.819675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set
00:29:30.670 [2024-11-26 19:19:47.819849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor
00:29:30.670 [2024-11-26 19:19:47.820003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.670 [2024-11-26 19:19:47.820009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.670 [2024-11-26 19:19:47.820015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.671 [2024-11-26 19:19:47.820021] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.671 [2024-11-26 19:19:47.831671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.671 [2024-11-26 19:19:47.832183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.671 [2024-11-26 19:19:47.832200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:30.671 [2024-11-26 19:19:47.832206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:30.671 [2024-11-26 19:19:47.832360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:30.671 [2024-11-26 19:19:47.832512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.671 [2024-11-26 19:19:47.832517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.671 [2024-11-26 19:19:47.832523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.671 [2024-11-26 19:19:47.832532] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.671 [2024-11-26 19:19:47.844344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.671 [2024-11-26 19:19:47.844941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.671 [2024-11-26 19:19:47.844972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:30.671 [2024-11-26 19:19:47.844981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:30.671 [2024-11-26 19:19:47.845148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:30.671 [2024-11-26 19:19:47.845308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.671 [2024-11-26 19:19:47.845315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.671 [2024-11-26 19:19:47.845321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.671 [2024-11-26 19:19:47.845327] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.671 [2024-11-26 19:19:47.856981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.671 [2024-11-26 19:19:47.857455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.671 [2024-11-26 19:19:47.857471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:30.671 [2024-11-26 19:19:47.857476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:30.671 [2024-11-26 19:19:47.857628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:30.671 [2024-11-26 19:19:47.857779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.671 [2024-11-26 19:19:47.857786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.671 [2024-11-26 19:19:47.857791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.671 [2024-11-26 19:19:47.857796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.671 [2024-11-26 19:19:47.869731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.671 [2024-11-26 19:19:47.870202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.671 [2024-11-26 19:19:47.870215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:30.671 [2024-11-26 19:19:47.870220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:30.671 [2024-11-26 19:19:47.870372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:30.671 [2024-11-26 19:19:47.870523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.671 [2024-11-26 19:19:47.870528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.671 [2024-11-26 19:19:47.870533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.671 [2024-11-26 19:19:47.870538] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.932 [2024-11-26 19:19:47.882485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.932 [2024-11-26 19:19:47.882950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-26 19:19:47.882963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:30.932 [2024-11-26 19:19:47.882968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:30.932 [2024-11-26 19:19:47.883120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:30.932 [2024-11-26 19:19:47.883277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.932 [2024-11-26 19:19:47.883284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.932 [2024-11-26 19:19:47.883289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.932 [2024-11-26 19:19:47.883294] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.932 [2024-11-26 19:19:47.895224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.932 [2024-11-26 19:19:47.895832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-26 19:19:47.895863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:30.932 [2024-11-26 19:19:47.895871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:30.932 [2024-11-26 19:19:47.896038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:30.932 [2024-11-26 19:19:47.896197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.932 [2024-11-26 19:19:47.896205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.932 [2024-11-26 19:19:47.896212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.932 [2024-11-26 19:19:47.896218] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.932 [2024-11-26 19:19:47.907873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.932 [2024-11-26 19:19:47.908465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-26 19:19:47.908495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:30.932 [2024-11-26 19:19:47.908505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:30.932 [2024-11-26 19:19:47.908671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:30.932 [2024-11-26 19:19:47.908825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.932 [2024-11-26 19:19:47.908831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.932 [2024-11-26 19:19:47.908837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.932 [2024-11-26 19:19:47.908843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.932 [2024-11-26 19:19:47.920497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.932 [2024-11-26 19:19:47.920847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-26 19:19:47.920862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:30.933 [2024-11-26 19:19:47.920871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:30.933 [2024-11-26 19:19:47.921023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:30.933 [2024-11-26 19:19:47.921178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.933 [2024-11-26 19:19:47.921184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.933 [2024-11-26 19:19:47.921189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.933 [2024-11-26 19:19:47.921194] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.933 [2024-11-26 19:19:47.933126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.933 [2024-11-26 19:19:47.933584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-26 19:19:47.933598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:30.933 [2024-11-26 19:19:47.933604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:30.933 [2024-11-26 19:19:47.933755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:30.933 [2024-11-26 19:19:47.933906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.933 [2024-11-26 19:19:47.933912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.933 [2024-11-26 19:19:47.933917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.933 [2024-11-26 19:19:47.933921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.933 [2024-11-26 19:19:47.945858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.933 [2024-11-26 19:19:47.946340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-26 19:19:47.946354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:30.933 [2024-11-26 19:19:47.946359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:30.933 [2024-11-26 19:19:47.946509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:30.933 [2024-11-26 19:19:47.946660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.933 [2024-11-26 19:19:47.946666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.933 [2024-11-26 19:19:47.946671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.933 [2024-11-26 19:19:47.946675] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.933 [2024-11-26 19:19:47.958598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.933 [2024-11-26 19:19:47.959133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-26 19:19:47.959169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:30.933 [2024-11-26 19:19:47.959178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:30.933 [2024-11-26 19:19:47.959344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:30.933 [2024-11-26 19:19:47.959502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.933 [2024-11-26 19:19:47.959508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.933 [2024-11-26 19:19:47.959513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.933 [2024-11-26 19:19:47.959519] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.933 [2024-11-26 19:19:47.971313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.933 [2024-11-26 19:19:47.971880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-26 19:19:47.971911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:30.933 [2024-11-26 19:19:47.971920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:30.933 [2024-11-26 19:19:47.972086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:30.933 [2024-11-26 19:19:47.972246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.933 [2024-11-26 19:19:47.972253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.933 [2024-11-26 19:19:47.972259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.933 [2024-11-26 19:19:47.972264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.933 [2024-11-26 19:19:47.984074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.933 [2024-11-26 19:19:47.984564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-26 19:19:47.984580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:30.933 [2024-11-26 19:19:47.984586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:30.933 [2024-11-26 19:19:47.984737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:30.933 [2024-11-26 19:19:47.984888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.933 [2024-11-26 19:19:47.984893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.933 [2024-11-26 19:19:47.984898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.933 [2024-11-26 19:19:47.984903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.933 [2024-11-26 19:19:47.996765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.933 [2024-11-26 19:19:47.997197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-26 19:19:47.997212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:30.933 [2024-11-26 19:19:47.997217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:30.933 [2024-11-26 19:19:47.997369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:30.933 [2024-11-26 19:19:47.997519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.933 [2024-11-26 19:19:47.997525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.933 [2024-11-26 19:19:47.997530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.933 [2024-11-26 19:19:47.997543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.933 [2024-11-26 19:19:48.009487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.933 [2024-11-26 19:19:48.010021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-26 19:19:48.010051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:30.933 [2024-11-26 19:19:48.010060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:30.933 [2024-11-26 19:19:48.010233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:30.933 [2024-11-26 19:19:48.010388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.933 [2024-11-26 19:19:48.010394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.933 [2024-11-26 19:19:48.010400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.933 [2024-11-26 19:19:48.010406] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.933 [2024-11-26 19:19:48.022206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.933 [2024-11-26 19:19:48.022692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-26 19:19:48.022708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:30.933 [2024-11-26 19:19:48.022713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:30.933 [2024-11-26 19:19:48.022864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:30.933 [2024-11-26 19:19:48.023015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.933 [2024-11-26 19:19:48.023021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.933 [2024-11-26 19:19:48.023026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.933 [2024-11-26 19:19:48.023031] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.933 [2024-11-26 19:19:48.034816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.933 [2024-11-26 19:19:48.035313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-26 19:19:48.035326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:30.933 [2024-11-26 19:19:48.035332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:30.933 [2024-11-26 19:19:48.035483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:30.933 [2024-11-26 19:19:48.035634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.933 [2024-11-26 19:19:48.035640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.933 [2024-11-26 19:19:48.035645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.933 [2024-11-26 19:19:48.035650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.933 [2024-11-26 19:19:48.047441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.933 [2024-11-26 19:19:48.047910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-26 19:19:48.047922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:30.934 [2024-11-26 19:19:48.047927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:30.934 [2024-11-26 19:19:48.048078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:30.934 [2024-11-26 19:19:48.048234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.934 [2024-11-26 19:19:48.048241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.934 [2024-11-26 19:19:48.048246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.934 [2024-11-26 19:19:48.048251] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.934 [2024-11-26 19:19:48.060182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.934 [2024-11-26 19:19:48.060660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-26 19:19:48.060672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:30.934 [2024-11-26 19:19:48.060677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:30.934 [2024-11-26 19:19:48.060828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:30.934 [2024-11-26 19:19:48.060979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.934 [2024-11-26 19:19:48.060984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.934 [2024-11-26 19:19:48.060989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.934 [2024-11-26 19:19:48.060994] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.934 [2024-11-26 19:19:48.072920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.934 [2024-11-26 19:19:48.073460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-26 19:19:48.073491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:30.934 [2024-11-26 19:19:48.073499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:30.934 [2024-11-26 19:19:48.073666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:30.934 [2024-11-26 19:19:48.073820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.934 [2024-11-26 19:19:48.073826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.934 [2024-11-26 19:19:48.073831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.934 [2024-11-26 19:19:48.073837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.934 [2024-11-26 19:19:48.085664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.934 [2024-11-26 19:19:48.086172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-26 19:19:48.086187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:30.934 [2024-11-26 19:19:48.086196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:30.934 [2024-11-26 19:19:48.086348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:30.934 [2024-11-26 19:19:48.086499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.934 [2024-11-26 19:19:48.086505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.934 [2024-11-26 19:19:48.086510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.934 [2024-11-26 19:19:48.086516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.934 4492.83 IOPS, 17.55 MiB/s [2024-11-26T18:19:48.147Z] [2024-11-26 19:19:48.098296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.934 [2024-11-26 19:19:48.098725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-26 19:19:48.098755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:30.934 [2024-11-26 19:19:48.098764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:30.934 [2024-11-26 19:19:48.098931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:30.934 [2024-11-26 19:19:48.099085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.934 [2024-11-26 19:19:48.099092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.934 [2024-11-26 19:19:48.099097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.934 [2024-11-26 19:19:48.099103] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
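The single throughput sample embedded above (4492.83 IOPS, 17.55 MiB/s, stamped in UTC as [2024-11-26T18:19:48.147Z] by what is likely the test tool's per-interval stats printer) is internally consistent with a 4 KiB workload: 17.55 MiB/s / 4492.83 IOPS is approximately 4096 bytes per request. This suggests I/O is still completing at a steady rate elsewhere while this path ([nqn.2016-06.io.spdk:cnode1, 2]) keeps failing to reconnect.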
00:29:30.934 [2024-11-26 19:19:48.111052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.934 [2024-11-26 19:19:48.111672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-26 19:19:48.111703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:30.934 [2024-11-26 19:19:48.111711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:30.934 [2024-11-26 19:19:48.111878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:30.934 [2024-11-26 19:19:48.112031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.934 [2024-11-26 19:19:48.112037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.934 [2024-11-26 19:19:48.112043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.934 [2024-11-26 19:19:48.112048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.934 [2024-11-26 19:19:48.123716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.934 [2024-11-26 19:19:48.124215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-26 19:19:48.124231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:30.934 [2024-11-26 19:19:48.124236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:30.934 [2024-11-26 19:19:48.124387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:30.934 [2024-11-26 19:19:48.124543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.934 [2024-11-26 19:19:48.124549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.934 [2024-11-26 19:19:48.124554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.934 [2024-11-26 19:19:48.124559] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.934 [2024-11-26 19:19:48.136357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.934 [2024-11-26 19:19:48.136854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-26 19:19:48.136867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:30.934 [2024-11-26 19:19:48.136872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:30.934 [2024-11-26 19:19:48.137023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:30.934 [2024-11-26 19:19:48.137179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.934 [2024-11-26 19:19:48.137185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.934 [2024-11-26 19:19:48.137190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.934 [2024-11-26 19:19:48.137195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.196 [2024-11-26 19:19:48.148998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.196 [2024-11-26 19:19:48.149426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.196 [2024-11-26 19:19:48.149438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:31.196 [2024-11-26 19:19:48.149444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:31.196 [2024-11-26 19:19:48.149594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:31.196 [2024-11-26 19:19:48.149745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.196 [2024-11-26 19:19:48.149751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.196 [2024-11-26 19:19:48.149757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.196 [2024-11-26 19:19:48.149761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.196 [2024-11-26 19:19:48.161702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.196 [2024-11-26 19:19:48.162206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.196 [2024-11-26 19:19:48.162226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:31.196 [2024-11-26 19:19:48.162232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:31.196 [2024-11-26 19:19:48.162388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:31.196 [2024-11-26 19:19:48.162541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.196 [2024-11-26 19:19:48.162547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.196 [2024-11-26 19:19:48.162556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.196 [2024-11-26 19:19:48.162562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.196 [2024-11-26 19:19:48.174368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.196 [2024-11-26 19:19:48.174833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.196 [2024-11-26 19:19:48.174846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:31.196 [2024-11-26 19:19:48.174851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:31.196 [2024-11-26 19:19:48.175002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:31.196 [2024-11-26 19:19:48.175153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.196 [2024-11-26 19:19:48.175163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.196 [2024-11-26 19:19:48.175169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.196 [2024-11-26 19:19:48.175173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.196 [2024-11-26 19:19:48.187107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.196 [2024-11-26 19:19:48.187680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.196 [2024-11-26 19:19:48.187711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:31.196 [2024-11-26 19:19:48.187719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:31.196 [2024-11-26 19:19:48.187886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:31.196 [2024-11-26 19:19:48.188040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.196 [2024-11-26 19:19:48.188046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.196 [2024-11-26 19:19:48.188052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.196 [2024-11-26 19:19:48.188058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.196 [2024-11-26 19:19:48.199861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.196 [2024-11-26 19:19:48.200373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.196 [2024-11-26 19:19:48.200389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:31.196 [2024-11-26 19:19:48.200394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:31.196 [2024-11-26 19:19:48.200546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:31.196 [2024-11-26 19:19:48.200697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.196 [2024-11-26 19:19:48.200702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.196 [2024-11-26 19:19:48.200707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.196 [2024-11-26 19:19:48.200712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.196 [2024-11-26 19:19:48.212509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.196 [2024-11-26 19:19:48.212976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.196 [2024-11-26 19:19:48.212989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:31.196 [2024-11-26 19:19:48.212995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:31.196 [2024-11-26 19:19:48.213146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:31.196 [2024-11-26 19:19:48.213303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.196 [2024-11-26 19:19:48.213309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.196 [2024-11-26 19:19:48.213314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.196 [2024-11-26 19:19:48.213319] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.196 [2024-11-26 19:19:48.225257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.196 [2024-11-26 19:19:48.225670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.196 [2024-11-26 19:19:48.225701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:31.196 [2024-11-26 19:19:48.225709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:31.196 [2024-11-26 19:19:48.225876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:31.196 [2024-11-26 19:19:48.226030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.196 [2024-11-26 19:19:48.226036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.196 [2024-11-26 19:19:48.226041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.196 [2024-11-26 19:19:48.226047] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.196 [2024-11-26 19:19:48.237987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.196 [2024-11-26 19:19:48.238563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.196 [2024-11-26 19:19:48.238594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:31.196 [2024-11-26 19:19:48.238602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:31.196 [2024-11-26 19:19:48.238772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:31.196 [2024-11-26 19:19:48.238925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.196 [2024-11-26 19:19:48.238931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.196 [2024-11-26 19:19:48.238936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.196 [2024-11-26 19:19:48.238942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.196 [2024-11-26 19:19:48.250611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.197 [2024-11-26 19:19:48.251201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.197 [2024-11-26 19:19:48.251232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:31.197 [2024-11-26 19:19:48.251244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:31.197 [2024-11-26 19:19:48.251413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:31.197 [2024-11-26 19:19:48.251567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.197 [2024-11-26 19:19:48.251573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.197 [2024-11-26 19:19:48.251579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.197 [2024-11-26 19:19:48.251585] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.197 [2024-11-26 19:19:48.263244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.197 [2024-11-26 19:19:48.263684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.197 [2024-11-26 19:19:48.263714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:31.197 [2024-11-26 19:19:48.263723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:31.197 [2024-11-26 19:19:48.263890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:31.197 [2024-11-26 19:19:48.264044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.197 [2024-11-26 19:19:48.264050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.197 [2024-11-26 19:19:48.264055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.197 [2024-11-26 19:19:48.264061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.197 [2024-11-26 19:19:48.275991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.197 [2024-11-26 19:19:48.276473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.197 [2024-11-26 19:19:48.276488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:31.197 [2024-11-26 19:19:48.276494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:31.197 [2024-11-26 19:19:48.276646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:31.197 [2024-11-26 19:19:48.276796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.197 [2024-11-26 19:19:48.276802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.197 [2024-11-26 19:19:48.276807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.197 [2024-11-26 19:19:48.276812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.197 [2024-11-26 19:19:48.288612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.197 [2024-11-26 19:19:48.289069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.197 [2024-11-26 19:19:48.289083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:31.197 [2024-11-26 19:19:48.289088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:31.197 [2024-11-26 19:19:48.289244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:31.197 [2024-11-26 19:19:48.289400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.197 [2024-11-26 19:19:48.289406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.197 [2024-11-26 19:19:48.289411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.197 [2024-11-26 19:19:48.289415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.197 [2024-11-26 19:19:48.301338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.197 [2024-11-26 19:19:48.301796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.197 [2024-11-26 19:19:48.301809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:31.197 [2024-11-26 19:19:48.301814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:31.197 [2024-11-26 19:19:48.301964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:31.197 [2024-11-26 19:19:48.302115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.197 [2024-11-26 19:19:48.302121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.197 [2024-11-26 19:19:48.302126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.197 [2024-11-26 19:19:48.302130] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.197 [2024-11-26 19:19:48.314042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.197 [2024-11-26 19:19:48.314539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.197 [2024-11-26 19:19:48.314552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:31.197 [2024-11-26 19:19:48.314558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:31.197 [2024-11-26 19:19:48.314708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:31.197 [2024-11-26 19:19:48.314859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.197 [2024-11-26 19:19:48.314865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.197 [2024-11-26 19:19:48.314870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.197 [2024-11-26 19:19:48.314874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.197 [2024-11-26 19:19:48.326664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.197 [2024-11-26 19:19:48.327211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.197 [2024-11-26 19:19:48.327241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:31.197 [2024-11-26 19:19:48.327250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:31.197 [2024-11-26 19:19:48.327419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:31.197 [2024-11-26 19:19:48.327573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.197 [2024-11-26 19:19:48.327579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.197 [2024-11-26 19:19:48.327588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.197 [2024-11-26 19:19:48.327594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.197 [2024-11-26 19:19:48.339386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.197 [2024-11-26 19:19:48.339976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.197 [2024-11-26 19:19:48.340006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:31.197 [2024-11-26 19:19:48.340014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:31.197 [2024-11-26 19:19:48.340187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:31.197 [2024-11-26 19:19:48.340342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.197 [2024-11-26 19:19:48.340348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.197 [2024-11-26 19:19:48.340354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.197 [2024-11-26 19:19:48.340360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.197 [2024-11-26 19:19:48.352016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.198 [2024-11-26 19:19:48.352496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.198 [2024-11-26 19:19:48.352511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:31.198 [2024-11-26 19:19:48.352517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:31.198 [2024-11-26 19:19:48.352668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:31.198 [2024-11-26 19:19:48.352819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.198 [2024-11-26 19:19:48.352825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.198 [2024-11-26 19:19:48.352830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.198 [2024-11-26 19:19:48.352834] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.198 [2024-11-26 19:19:48.364763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.198 [2024-11-26 19:19:48.365269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.198 [2024-11-26 19:19:48.365300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:31.198 [2024-11-26 19:19:48.365309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:31.198 [2024-11-26 19:19:48.365478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:31.198 [2024-11-26 19:19:48.365632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.198 [2024-11-26 19:19:48.365639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.198 [2024-11-26 19:19:48.365644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.198 [2024-11-26 19:19:48.365650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.198 [2024-11-26 19:19:48.377457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.198 [2024-11-26 19:19:48.378024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.198 [2024-11-26 19:19:48.378054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:31.198 [2024-11-26 19:19:48.378063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:31.198 [2024-11-26 19:19:48.378237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:31.198 [2024-11-26 19:19:48.378392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.198 [2024-11-26 19:19:48.378399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.198 [2024-11-26 19:19:48.378405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.198 [2024-11-26 19:19:48.378411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.198 [2024-11-26 19:19:48.390213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.198 [2024-11-26 19:19:48.390671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.198 [2024-11-26 19:19:48.390686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:31.198 [2024-11-26 19:19:48.390692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:31.198 [2024-11-26 19:19:48.390844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:31.198 [2024-11-26 19:19:48.390996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.198 [2024-11-26 19:19:48.391002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.198 [2024-11-26 19:19:48.391007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.198 [2024-11-26 19:19:48.391012] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.198 [2024-11-26 19:19:48.402947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.198 [2024-11-26 19:19:48.403438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.198 [2024-11-26 19:19:48.403469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:31.198 [2024-11-26 19:19:48.403478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:31.198 [2024-11-26 19:19:48.403648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:31.198 [2024-11-26 19:19:48.403802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.198 [2024-11-26 19:19:48.403809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.198 [2024-11-26 19:19:48.403816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.198 [2024-11-26 19:19:48.403822] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.459 [2024-11-26 19:19:48.415627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.459 [2024-11-26 19:19:48.416185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.459 [2024-11-26 19:19:48.416216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:31.459 [2024-11-26 19:19:48.416229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:31.459 [2024-11-26 19:19:48.416396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:31.459 [2024-11-26 19:19:48.416549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.459 [2024-11-26 19:19:48.416556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.459 [2024-11-26 19:19:48.416561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.459 [2024-11-26 19:19:48.416567] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.459 [2024-11-26 19:19:48.428361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.459 [2024-11-26 19:19:48.428927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.459 [2024-11-26 19:19:48.428958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:31.459 [2024-11-26 19:19:48.428966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:31.459 [2024-11-26 19:19:48.429133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:31.459 [2024-11-26 19:19:48.429293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.459 [2024-11-26 19:19:48.429300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.459 [2024-11-26 19:19:48.429306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.459 [2024-11-26 19:19:48.429312] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.460 [2024-11-26 19:19:48.441099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.460 [2024-11-26 19:19:48.441768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.460 [2024-11-26 19:19:48.441798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:31.460 [2024-11-26 19:19:48.441807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:31.460 [2024-11-26 19:19:48.441974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:31.460 [2024-11-26 19:19:48.442129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.460 [2024-11-26 19:19:48.442135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.460 [2024-11-26 19:19:48.442140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.460 [2024-11-26 19:19:48.442146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.460 19:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:31.460 19:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:29:31.460 19:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:31.460 19:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:31.460 19:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:31.460 [2024-11-26 19:19:48.453820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.460 [2024-11-26 19:19:48.454286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.460 [2024-11-26 19:19:48.454305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:31.460 [2024-11-26 19:19:48.454311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:31.460 [2024-11-26 19:19:48.454463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:31.460 [2024-11-26 19:19:48.454614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.460 [2024-11-26 19:19:48.454619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.460 [2024-11-26 19:19:48.454624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.460 [2024-11-26 19:19:48.454629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.460 [2024-11-26 19:19:48.466576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.460 [2024-11-26 19:19:48.467141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.460 [2024-11-26 19:19:48.467178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:31.460 [2024-11-26 19:19:48.467187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:31.460 [2024-11-26 19:19:48.467354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:31.460 [2024-11-26 19:19:48.467510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.460 [2024-11-26 19:19:48.467517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.460 [2024-11-26 19:19:48.467523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.460 [2024-11-26 19:19:48.467529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.460 [2024-11-26 19:19:48.479330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.460 [2024-11-26 19:19:48.479916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.460 [2024-11-26 19:19:48.479947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:31.460 [2024-11-26 19:19:48.479956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:31.460 [2024-11-26 19:19:48.480123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:31.460 [2024-11-26 19:19:48.480283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.460 [2024-11-26 19:19:48.480292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.460 [2024-11-26 19:19:48.480298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.460 [2024-11-26 19:19:48.480305] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.460 19:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:31.460 19:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:31.460 19:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.460 19:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:31.460 [2024-11-26 19:19:48.490474] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:31.460 [2024-11-26 19:19:48.491955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.460 [2024-11-26 19:19:48.492200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.460 [2024-11-26 19:19:48.492220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:31.460 [2024-11-26 19:19:48.492226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:31.460 [2024-11-26 19:19:48.492383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:31.460 [2024-11-26 19:19:48.492535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.460 [2024-11-26 19:19:48.492541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.460 [2024-11-26 19:19:48.492546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.460 [2024-11-26 19:19:48.492551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.460 19:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.460 19:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:31.460 19:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.460 19:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:31.460 [2024-11-26 19:19:48.504628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.460 [2024-11-26 19:19:48.505113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.460 [2024-11-26 19:19:48.505143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:31.460 [2024-11-26 19:19:48.505152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:31.460 [2024-11-26 19:19:48.505325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:31.460 [2024-11-26 19:19:48.505480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.460 [2024-11-26 19:19:48.505486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.460 [2024-11-26 19:19:48.505491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.460 [2024-11-26 19:19:48.505497] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.460 [2024-11-26 19:19:48.517286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.460 [2024-11-26 19:19:48.517768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.460 [2024-11-26 19:19:48.517783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:31.460 [2024-11-26 19:19:48.517790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:31.460 [2024-11-26 19:19:48.517941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:31.460 [2024-11-26 19:19:48.518092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.460 [2024-11-26 19:19:48.518098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.460 [2024-11-26 19:19:48.518104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.460 [2024-11-26 19:19:48.518109] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.460 Malloc0 00:29:31.460 19:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.460 19:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:31.460 19:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.460 19:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:31.460 [2024-11-26 19:19:48.529899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.460 [2024-11-26 19:19:48.530467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.460 [2024-11-26 19:19:48.530498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:31.460 [2024-11-26 19:19:48.530507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:31.460 [2024-11-26 19:19:48.530674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:31.460 [2024-11-26 19:19:48.530829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.460 [2024-11-26 19:19:48.530835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.460 [2024-11-26 19:19:48.530841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.460 [2024-11-26 19:19:48.530847] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.460 19:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.460 19:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:31.461 19:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.461 19:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:31.461 [2024-11-26 19:19:48.542641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.461 [2024-11-26 19:19:48.543251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.461 [2024-11-26 19:19:48.543282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:31.461 [2024-11-26 19:19:48.543291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:31.461 [2024-11-26 19:19:48.543457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:31.461 [2024-11-26 19:19:48.543611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.461 [2024-11-26 19:19:48.543618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.461 [2024-11-26 19:19:48.543623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:29:31.461 [2024-11-26 19:19:48.543629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.461 19:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.461 19:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:31.461 19:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.461 19:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:31.461 [2024-11-26 19:19:48.555294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.461 [2024-11-26 19:19:48.555893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.461 [2024-11-26 19:19:48.555927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173b010 with addr=10.0.0.2, port=4420 00:29:31.461 [2024-11-26 19:19:48.555935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b010 is same with the state(6) to be set 00:29:31.461 [2024-11-26 19:19:48.556102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b010 (9): Bad file descriptor 00:29:31.461 [2024-11-26 19:19:48.556262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.461 [2024-11-26 19:19:48.556268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.461 [2024-11-26 19:19:48.556274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.461 [2024-11-26 19:19:48.556280] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.461 [2024-11-26 19:19:48.558299] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:31.461 19:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.461 19:19:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3125475 00:29:31.461 [2024-11-26 19:19:48.567941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.721 [2024-11-26 19:19:48.677599] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
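The rpc_cmd calls interleaved through the retry noise above are what finally bring the target up, which is why the last record flips to "Resetting controller successful". Outside the harness, the same target could be staged with SPDK's rpc.py against a running nvmf_tgt (a sketch assuming the default RPC socket; subcommands and arguments copied from the log):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener on 10.0.0.2:4420 is registered, the pending reconnects complete and bdevperf I/O begins.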
00:29:32.919 4621.14 IOPS, 18.05 MiB/s
[2024-11-26T18:19:51.516Z] 5635.12 IOPS, 22.01 MiB/s
[2024-11-26T18:19:52.459Z] 6443.44 IOPS, 25.17 MiB/s
[2024-11-26T18:19:53.400Z] 7069.90 IOPS, 27.62 MiB/s
[2024-11-26T18:19:54.341Z] 7598.55 IOPS, 29.68 MiB/s
[2024-11-26T18:19:55.283Z] 8040.92 IOPS, 31.41 MiB/s
[2024-11-26T18:19:56.225Z] 8407.69 IOPS, 32.84 MiB/s
[2024-11-26T18:19:57.167Z] 8721.43 IOPS, 34.07 MiB/s
00:29:39.954 Latency(us)
00:29:39.954 [2024-11-26T18:19:57.167Z] Device Information          : runtime(s)  IOPS     MiB/s  Fail/s    TO/s  Average  min     max
00:29:39.954 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:39.954 Verification LBA range: start 0x0 length 0x4000
00:29:39.954 Nvme1n1                     : 15.00       8999.19  35.15  13767.63  0.00  5603.67  552.96  13216.43
00:29:39.954 [2024-11-26T18:19:57.167Z] ===================================================================================================================
00:29:39.954 [2024-11-26T18:19:57.167Z] Total                       :             8999.19  35.15  13767.63  0.00  5603.67  552.96  13216.43
00:29:40.214 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:29:40.214 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:40.214 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:40.214 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:40.214 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:40.214 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:29:40.214 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:29:40.214 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:40.214 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:29:40.214 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:40.214 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:29:40.214 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:40.214 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:40.214 rmmod nvme_tcp
00:29:40.214 rmmod nvme_fabrics
00:29:40.214 rmmod nvme_keyring
00:29:40.214 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:40.214 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:29:40.214 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
00:29:40.214 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3126522 ']'
00:29:40.214 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3126522
00:29:40.214 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 3126522 ']'
00:29:40.214 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3126522
00:29:40.214 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname
00:29:40.214 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:40.214 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3126522
00:29:40.214 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:40.214 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:40.214 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3126522'
00:29:40.214 killing process with pid 3126522
00:29:40.214 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 3126522
00:29:40.214 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3126522
00:29:40.475 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:40.475 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:29:40.475 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:29:40.475 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr
00:29:40.475 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save
00:29:40.475 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:29:40.475 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore
00:29:40.475 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:40.475 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:40.475 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:40.475 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:40.475 19:19:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:42.384 19:19:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:42.384
00:29:42.384 real 0m28.408s
00:29:42.384 user 1m3.849s
00:29:42.384 sys 0m7.761s
00:29:42.384 19:19:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:42.384 19:19:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:42.384 ************************************
00:29:42.384 END TEST nvmf_bdevperf
00:29:42.384 ************************************
00:29:42.384 19:19:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:29:42.384 19:19:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:29:42.384 19:19:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:42.384 19:19:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:29:42.646 ************************************
00:29:42.646 START TEST nvmf_target_disconnect
00:29:42.646 ************************************
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:29:42.646 * Looking for test storage...
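For the bdevperf summary table above: with the 4096-byte IO size shown in the job header, the MiB/s column is just IOPS scaled by IO size. A quick cross-check of the final average (a sketch, assuming 1 MiB = 1048576 bytes):

    awk 'BEGIN { printf "%.2f MiB/s\n", 8999.19 * 4096 / 1048576 }'
    # 35.15 MiB/s, matching the Nvme1n1 and Total rows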
00:29:42.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-:
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-:
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<'
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:29:42.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:42.646 --rc genhtml_branch_coverage=1
00:29:42.646 --rc genhtml_function_coverage=1
00:29:42.646 --rc genhtml_legend=1
00:29:42.646 --rc geninfo_all_blocks=1
00:29:42.646 --rc geninfo_unexecuted_blocks=1
00:29:42.646
00:29:42.646 '
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:29:42.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:42.646 --rc genhtml_branch_coverage=1
00:29:42.646 --rc genhtml_function_coverage=1
00:29:42.646 --rc genhtml_legend=1
00:29:42.646 --rc geninfo_all_blocks=1
00:29:42.646 --rc geninfo_unexecuted_blocks=1
00:29:42.646
00:29:42.646 '
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:29:42.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:42.646 --rc genhtml_branch_coverage=1
00:29:42.646 --rc genhtml_function_coverage=1
00:29:42.646 --rc genhtml_legend=1
00:29:42.646 --rc geninfo_all_blocks=1
00:29:42.646 --rc geninfo_unexecuted_blocks=1
00:29:42.646
00:29:42.646 '
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:29:42.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:42.646 --rc genhtml_branch_coverage=1
00:29:42.646 --rc genhtml_function_coverage=1
00:29:42.646 --rc genhtml_legend=1
00:29:42.646 --rc geninfo_all_blocks=1
00:29:42.646 --rc geninfo_unexecuted_blocks=1
00:29:42.646
00:29:42.646 '
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:42.646 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:42.907 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:42.907 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:42.907 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:42.908 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:42.908 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH
00:29:42.908 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:42.908 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0
00:29:42.908 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:29:42.908 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:29:42.908 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:29:42.908 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:29:42.908 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:29:42.908 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:29:42.908 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:29:42.908 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:29:42.908 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:29:42.908 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0
00:29:42.908 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:29:42.908 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64
00:29:42.908 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512
00:29:42.908 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit
00:29:42.908 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:29:42.908 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:29:42.908 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs
00:29:42.908 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no
00:29:42.908 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns
00:29:42.908 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:42.908 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:42.908 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:42.908 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:29:42.908 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:29:42.908 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable
00:29:42.908 19:19:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=()
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=()
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=()
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=()
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=()
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=()
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=()
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:29:51.045 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:29:51.045 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]]
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:29:51.045 Found net devices under 0000:4b:00.0: cvl_0_0
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]]
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:29:51.045 Found net devices under 0000:4b:00.1: cvl_0_1
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
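The device scan above found the two E810 ports as cvl_0_0 and cvl_0_1; nvmf_tcp_init then isolates the target port in its own network namespace so host and target traffic really cross the wire, as the records below show. Condensed into a standalone sketch (interface names and addresses as in this run; assumes root and iproute2):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2   # initiator-side reachability check, mirrored below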
00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:51.045 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:51.046 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:51.046 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.578 ms 00:29:51.046 00:29:51.046 --- 10.0.0.2 ping statistics --- 00:29:51.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.046 rtt min/avg/max/mdev = 0.578/0.578/0.578/0.000 ms 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:51.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:51.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:29:51.046 00:29:51.046 --- 10.0.0.1 ping statistics --- 00:29:51.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.046 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:51.046 ************************************ 00:29:51.046 START TEST nvmf_target_disconnect_tc1 00:29:51.046 ************************************ 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:51.046 19:20:07 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:51.046 [2024-11-26 19:20:07.673106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.046 [2024-11-26 19:20:07.673189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ae0 with addr=10.0.0.2, port=4420 00:29:51.046 [2024-11-26 19:20:07.673219] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:51.046 [2024-11-26 19:20:07.673240] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:51.046 [2024-11-26 19:20:07.673249] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:29:51.046 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:51.046 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:51.046 Initializing NVMe Controllers 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:51.046 00:29:51.046 real 0m0.146s 00:29:51.046 user 0m0.073s 00:29:51.046 sys 0m0.072s 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:51.046 ************************************ 00:29:51.046 END TEST nvmf_target_disconnect_tc1 00:29:51.046 ************************************ 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 
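tc1 is an expected-failure test: nothing is listening on 10.0.0.2:4420 yet, so reconnect exits non-zero (connect() fails with errno 111, ECONNREFUSED) and the NOT wrapper turns that failure into a pass. A reduced sketch of the idiom (this helper body is an assumption; the real autotest_common.sh version additionally validates the executable, as the type -t / type -P trace above shows):

NOT() {                          # succeed only when the wrapped command fails
    if "$@"; then
        return 1                 # unexpected success -> test failure
    fi
    return 0                     # expected failure -> test passes
}
NOT ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'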
00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:51.046 ************************************ 00:29:51.046 START TEST nvmf_target_disconnect_tc2 00:29:51.046 ************************************ 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3132750 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3132750 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3132750 ']' 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:51.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:51.046 19:20:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:51.046 [2024-11-26 19:20:07.841600] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:29:51.046 [2024-11-26 19:20:07.841658] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:51.046 [2024-11-26 19:20:07.942405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:51.046 [2024-11-26 19:20:07.995516] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:51.046 [2024-11-26 19:20:07.995567] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:51.046 [2024-11-26 19:20:07.995576] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:51.046 [2024-11-26 19:20:07.995583] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:51.046 [2024-11-26 19:20:07.995590] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:51.046 [2024-11-26 19:20:07.997602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:51.046 [2024-11-26 19:20:07.997762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:51.046 [2024-11-26 19:20:07.997922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:51.047 [2024-11-26 19:20:07.997922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:51.618 19:20:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:51.618 19:20:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:51.618 19:20:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:51.618 19:20:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:51.618 19:20:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:51.618 19:20:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:51.618 19:20:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:51.618 19:20:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.618 19:20:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:51.618 Malloc0 00:29:51.618 19:20:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.618 19:20:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:51.618 19:20:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.618 19:20:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:51.618 [2024-11-26 19:20:08.755455] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:51.618 19:20:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.618 19:20:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:51.618 19:20:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.618 19:20:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:51.618 19:20:08 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.618 19:20:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:51.618 19:20:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.618 19:20:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:51.619 19:20:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.619 19:20:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:51.619 19:20:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.619 19:20:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:51.619 [2024-11-26 19:20:08.795900] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:51.619 19:20:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.619 19:20:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:51.619 19:20:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.619 19:20:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:51.619 19:20:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.619 19:20:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3132901 00:29:51.619 19:20:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:51.619 19:20:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:54.189 19:20:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3132750 00:29:54.189 19:20:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:54.189 Read completed with error (sct=0, sc=8) 00:29:54.189 starting I/O failed 00:29:54.189 Read completed with error (sct=0, sc=8) 00:29:54.189 starting I/O failed 00:29:54.189 Read completed with error (sct=0, sc=8) 00:29:54.189 starting I/O failed 00:29:54.189 Read completed with error (sct=0, sc=8) 00:29:54.189 starting I/O failed 00:29:54.189 Read completed with error (sct=0, sc=8) 00:29:54.189 starting I/O failed 00:29:54.189 Read completed with error (sct=0, sc=8) 00:29:54.189 starting I/O failed 00:29:54.189 Read completed with error 
(sct=0, sc=8) 00:29:54.189 starting I/O failed 00:29:54.189 Read completed with error (sct=0, sc=8) 00:29:54.189 starting I/O failed 00:29:54.189 Read completed with error (sct=0, sc=8) 00:29:54.189 starting I/O failed 00:29:54.189 Read completed with error (sct=0, sc=8) 00:29:54.189 starting I/O failed 00:29:54.189 Read completed with error (sct=0, sc=8) 00:29:54.189 starting I/O failed 00:29:54.189 Read completed with error (sct=0, sc=8) 00:29:54.189 starting I/O failed 00:29:54.189 Read completed with error (sct=0, sc=8) 00:29:54.189 starting I/O failed 00:29:54.189 Read completed with error (sct=0, sc=8) 00:29:54.189 starting I/O failed 00:29:54.189 Read completed with error (sct=0, sc=8) 00:29:54.189 starting I/O failed 00:29:54.189 Read completed with error (sct=0, sc=8) 00:29:54.189 starting I/O failed 00:29:54.189 Read completed with error (sct=0, sc=8) 00:29:54.189 starting I/O failed 00:29:54.189 Read completed with error (sct=0, sc=8) 00:29:54.189 starting I/O failed 00:29:54.189 Write completed with error (sct=0, sc=8) 00:29:54.189 starting I/O failed 00:29:54.189 Read completed with error (sct=0, sc=8) 00:29:54.189 starting I/O failed 00:29:54.189 Write completed with error (sct=0, sc=8) 00:29:54.189 starting I/O failed 00:29:54.189 Write completed with error (sct=0, sc=8) 00:29:54.189 starting I/O failed 00:29:54.189 Read completed with error (sct=0, sc=8) 00:29:54.189 starting I/O failed 00:29:54.189 Read completed with error (sct=0, sc=8) 00:29:54.189 starting I/O failed 00:29:54.189 Write completed with error (sct=0, sc=8) 00:29:54.189 starting I/O failed 00:29:54.189 Write completed with error (sct=0, sc=8) 00:29:54.189 starting I/O failed 00:29:54.189 Write completed with error (sct=0, sc=8) 00:29:54.189 starting I/O failed 00:29:54.189 Write completed with error (sct=0, sc=8) 00:29:54.189 starting I/O failed 00:29:54.189 Write completed with error (sct=0, sc=8) 00:29:54.189 starting I/O failed 00:29:54.189 Write completed with error (sct=0, sc=8) 00:29:54.189 starting I/O failed 00:29:54.189 Write completed with error (sct=0, sc=8) 00:29:54.189 starting I/O failed 00:29:54.189 Write completed with error (sct=0, sc=8) 00:29:54.189 starting I/O failed 00:29:54.189 [2024-11-26 19:20:10.835947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.189 [2024-11-26 19:20:10.836548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.189 [2024-11-26 19:20:10.836620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.189 qpair failed and we were unable to recover it. 00:29:54.189 [2024-11-26 19:20:10.836975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.189 [2024-11-26 19:20:10.836989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.189 qpair failed and we were unable to recover it. 00:29:54.189 [2024-11-26 19:20:10.837432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.189 [2024-11-26 19:20:10.837497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.189 qpair failed and we were unable to recover it. 
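To decode the failure codes above: errno 111 is ECONNREFUSED on Linux, and a completion with sct=0, sc=8 is the NVMe generic status "Command Aborted due to SQ Deletion", which is expected once the target process is gone. An illustrative check (assumes moreutils' errno(1) helper is installed):

errno 111        # -> ECONNREFUSED 111 Connection refused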
[duplicate log blocks elided: the pair "posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111" / "nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420" followed by "qpair failed and we were unable to recover it." repeats for every subsequent reconnect attempt, timestamps 19:20:10.837 through 19:20:10.880.]
00:29:54.192 [2024-11-26 19:20:10.880374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.193 [2024-11-26 19:20:10.880395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.193 qpair failed and we were unable to recover it. 00:29:54.193 [2024-11-26 19:20:10.880714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.193 [2024-11-26 19:20:10.880733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.193 qpair failed and we were unable to recover it. 00:29:54.193 [2024-11-26 19:20:10.881062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.193 [2024-11-26 19:20:10.881082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.193 qpair failed and we were unable to recover it. 00:29:54.193 [2024-11-26 19:20:10.881414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.193 [2024-11-26 19:20:10.881435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.193 qpair failed and we were unable to recover it. 00:29:54.193 [2024-11-26 19:20:10.881774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.193 [2024-11-26 19:20:10.881793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.193 qpair failed and we were unable to recover it. 00:29:54.193 [2024-11-26 19:20:10.882114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.193 [2024-11-26 19:20:10.882133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.193 qpair failed and we were unable to recover it. 00:29:54.193 [2024-11-26 19:20:10.882483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.193 [2024-11-26 19:20:10.882504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.193 qpair failed and we were unable to recover it. 00:29:54.193 [2024-11-26 19:20:10.882828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.193 [2024-11-26 19:20:10.882848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.193 qpair failed and we were unable to recover it. 00:29:54.193 [2024-11-26 19:20:10.883192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.193 [2024-11-26 19:20:10.883214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.193 qpair failed and we were unable to recover it. 00:29:54.193 [2024-11-26 19:20:10.883557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.193 [2024-11-26 19:20:10.883577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.193 qpair failed and we were unable to recover it. 
00:29:54.193 [2024-11-26 19:20:10.883903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.193 [2024-11-26 19:20:10.883922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.193 qpair failed and we were unable to recover it. 00:29:54.193 [2024-11-26 19:20:10.884269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.193 [2024-11-26 19:20:10.884290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.193 qpair failed and we were unable to recover it. 00:29:54.193 [2024-11-26 19:20:10.884641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.193 [2024-11-26 19:20:10.884661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.193 qpair failed and we were unable to recover it. 00:29:54.193 [2024-11-26 19:20:10.884984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.193 [2024-11-26 19:20:10.885003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.193 qpair failed and we were unable to recover it. 00:29:54.193 [2024-11-26 19:20:10.885339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.193 [2024-11-26 19:20:10.885359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.193 qpair failed and we were unable to recover it. 00:29:54.193 [2024-11-26 19:20:10.885697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.193 [2024-11-26 19:20:10.885717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.193 qpair failed and we were unable to recover it. 00:29:54.193 [2024-11-26 19:20:10.885942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.193 [2024-11-26 19:20:10.885961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.193 qpair failed and we were unable to recover it. 00:29:54.193 [2024-11-26 19:20:10.886280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.193 [2024-11-26 19:20:10.886300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.193 qpair failed and we were unable to recover it. 00:29:54.193 [2024-11-26 19:20:10.886623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.193 [2024-11-26 19:20:10.886644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.193 qpair failed and we were unable to recover it. 00:29:54.193 [2024-11-26 19:20:10.887072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.193 [2024-11-26 19:20:10.887090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.193 qpair failed and we were unable to recover it. 
00:29:54.193 [2024-11-26 19:20:10.887416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.193 [2024-11-26 19:20:10.887437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.193 qpair failed and we were unable to recover it. 00:29:54.193 [2024-11-26 19:20:10.887769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.193 [2024-11-26 19:20:10.887789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.193 qpair failed and we were unable to recover it. 00:29:54.193 [2024-11-26 19:20:10.888111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.193 [2024-11-26 19:20:10.888129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.193 qpair failed and we were unable to recover it. 00:29:54.193 [2024-11-26 19:20:10.888503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.193 [2024-11-26 19:20:10.888523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.193 qpair failed and we were unable to recover it. 00:29:54.193 [2024-11-26 19:20:10.888777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.193 [2024-11-26 19:20:10.888797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.193 qpair failed and we were unable to recover it. 00:29:54.193 [2024-11-26 19:20:10.889138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.193 [2024-11-26 19:20:10.889168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.193 qpair failed and we were unable to recover it. 00:29:54.193 [2024-11-26 19:20:10.889361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.193 [2024-11-26 19:20:10.889381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.193 qpair failed and we were unable to recover it. 00:29:54.193 [2024-11-26 19:20:10.889712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.193 [2024-11-26 19:20:10.889731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.193 qpair failed and we were unable to recover it. 00:29:54.193 [2024-11-26 19:20:10.889950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.193 [2024-11-26 19:20:10.889969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.193 qpair failed and we were unable to recover it. 00:29:54.193 [2024-11-26 19:20:10.890222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.193 [2024-11-26 19:20:10.890247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.193 qpair failed and we were unable to recover it. 
00:29:54.193 [2024-11-26 19:20:10.890585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.193 [2024-11-26 19:20:10.890604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.193 qpair failed and we were unable to recover it. 00:29:54.193 [2024-11-26 19:20:10.890928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.193 [2024-11-26 19:20:10.890947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.193 qpair failed and we were unable to recover it. 00:29:54.193 [2024-11-26 19:20:10.891275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.193 [2024-11-26 19:20:10.891297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.193 qpair failed and we were unable to recover it. 00:29:54.193 [2024-11-26 19:20:10.891631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.891650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.194 qpair failed and we were unable to recover it. 00:29:54.194 [2024-11-26 19:20:10.891993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.892012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.194 qpair failed and we were unable to recover it. 00:29:54.194 [2024-11-26 19:20:10.892213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.892233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.194 qpair failed and we were unable to recover it. 00:29:54.194 [2024-11-26 19:20:10.892573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.892597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.194 qpair failed and we were unable to recover it. 00:29:54.194 [2024-11-26 19:20:10.892954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.892981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.194 qpair failed and we were unable to recover it. 00:29:54.194 [2024-11-26 19:20:10.893361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.893388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.194 qpair failed and we were unable to recover it. 00:29:54.194 [2024-11-26 19:20:10.893750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.893776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.194 qpair failed and we were unable to recover it. 
00:29:54.194 [2024-11-26 19:20:10.894138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.894177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.194 qpair failed and we were unable to recover it. 00:29:54.194 [2024-11-26 19:20:10.894526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.894551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.194 qpair failed and we were unable to recover it. 00:29:54.194 [2024-11-26 19:20:10.894920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.894944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.194 qpair failed and we were unable to recover it. 00:29:54.194 [2024-11-26 19:20:10.895311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.895338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.194 qpair failed and we were unable to recover it. 00:29:54.194 [2024-11-26 19:20:10.895700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.895725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.194 qpair failed and we were unable to recover it. 00:29:54.194 [2024-11-26 19:20:10.896090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.896117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.194 qpair failed and we were unable to recover it. 00:29:54.194 [2024-11-26 19:20:10.896363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.896389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.194 qpair failed and we were unable to recover it. 00:29:54.194 [2024-11-26 19:20:10.896744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.896769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.194 qpair failed and we were unable to recover it. 00:29:54.194 [2024-11-26 19:20:10.897138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.897180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.194 qpair failed and we were unable to recover it. 00:29:54.194 [2024-11-26 19:20:10.897573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.897598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.194 qpair failed and we were unable to recover it. 
00:29:54.194 [2024-11-26 19:20:10.897946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.897971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.194 qpair failed and we were unable to recover it. 00:29:54.194 [2024-11-26 19:20:10.898322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.898348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.194 qpair failed and we were unable to recover it. 00:29:54.194 [2024-11-26 19:20:10.898715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.898742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.194 qpair failed and we were unable to recover it. 00:29:54.194 [2024-11-26 19:20:10.899111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.899137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.194 qpair failed and we were unable to recover it. 00:29:54.194 [2024-11-26 19:20:10.899482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.899517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.194 qpair failed and we were unable to recover it. 00:29:54.194 [2024-11-26 19:20:10.899857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.899882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.194 qpair failed and we were unable to recover it. 00:29:54.194 [2024-11-26 19:20:10.900122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.900153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.194 qpair failed and we were unable to recover it. 00:29:54.194 [2024-11-26 19:20:10.900503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.900529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.194 qpair failed and we were unable to recover it. 00:29:54.194 [2024-11-26 19:20:10.900894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.900919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.194 qpair failed and we were unable to recover it. 00:29:54.194 [2024-11-26 19:20:10.901299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.901327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.194 qpair failed and we were unable to recover it. 
00:29:54.194 [2024-11-26 19:20:10.901661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.901685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.194 qpair failed and we were unable to recover it. 00:29:54.194 [2024-11-26 19:20:10.902027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.902052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.194 qpair failed and we were unable to recover it. 00:29:54.194 [2024-11-26 19:20:10.902341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.902368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.194 qpair failed and we were unable to recover it. 00:29:54.194 [2024-11-26 19:20:10.902756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.902783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.194 qpair failed and we were unable to recover it. 00:29:54.194 [2024-11-26 19:20:10.903141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.903177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.194 qpair failed and we were unable to recover it. 00:29:54.194 [2024-11-26 19:20:10.903527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.903554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.194 qpair failed and we were unable to recover it. 00:29:54.194 [2024-11-26 19:20:10.903908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.903939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.194 qpair failed and we were unable to recover it. 00:29:54.194 [2024-11-26 19:20:10.904297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.904324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.194 qpair failed and we were unable to recover it. 00:29:54.194 [2024-11-26 19:20:10.904699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.904728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.194 qpair failed and we were unable to recover it. 00:29:54.194 [2024-11-26 19:20:10.904995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.905024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.194 qpair failed and we were unable to recover it. 
00:29:54.194 [2024-11-26 19:20:10.905418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.905451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.194 qpair failed and we were unable to recover it. 00:29:54.194 [2024-11-26 19:20:10.905829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.194 [2024-11-26 19:20:10.905858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 00:29:54.195 [2024-11-26 19:20:10.906217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.195 [2024-11-26 19:20:10.906246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 00:29:54.195 [2024-11-26 19:20:10.906604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.195 [2024-11-26 19:20:10.906633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 00:29:54.195 [2024-11-26 19:20:10.906992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.195 [2024-11-26 19:20:10.907021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 00:29:54.195 [2024-11-26 19:20:10.907265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.195 [2024-11-26 19:20:10.907296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 00:29:54.195 [2024-11-26 19:20:10.907706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.195 [2024-11-26 19:20:10.907735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 00:29:54.195 [2024-11-26 19:20:10.908101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.195 [2024-11-26 19:20:10.908130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 00:29:54.195 [2024-11-26 19:20:10.908287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.195 [2024-11-26 19:20:10.908321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 00:29:54.195 [2024-11-26 19:20:10.908652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.195 [2024-11-26 19:20:10.908682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 
00:29:54.195 [2024-11-26 19:20:10.909032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.195 [2024-11-26 19:20:10.909064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 00:29:54.195 [2024-11-26 19:20:10.909415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.195 [2024-11-26 19:20:10.909446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 00:29:54.195 [2024-11-26 19:20:10.909802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.195 [2024-11-26 19:20:10.909831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 00:29:54.195 [2024-11-26 19:20:10.910089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.195 [2024-11-26 19:20:10.910118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 00:29:54.195 [2024-11-26 19:20:10.910518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.195 [2024-11-26 19:20:10.910550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 00:29:54.195 [2024-11-26 19:20:10.910907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.195 [2024-11-26 19:20:10.910937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 00:29:54.195 [2024-11-26 19:20:10.911375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.195 [2024-11-26 19:20:10.911407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 00:29:54.195 [2024-11-26 19:20:10.911744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.195 [2024-11-26 19:20:10.911772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 00:29:54.195 [2024-11-26 19:20:10.912115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.195 [2024-11-26 19:20:10.912143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 00:29:54.195 [2024-11-26 19:20:10.912528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.195 [2024-11-26 19:20:10.912557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 
00:29:54.195 [2024-11-26 19:20:10.912800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.195 [2024-11-26 19:20:10.912828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 00:29:54.195 [2024-11-26 19:20:10.913199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.195 [2024-11-26 19:20:10.913230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 00:29:54.195 [2024-11-26 19:20:10.913587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.195 [2024-11-26 19:20:10.913617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 00:29:54.195 [2024-11-26 19:20:10.913984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.195 [2024-11-26 19:20:10.914013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 00:29:54.195 [2024-11-26 19:20:10.914382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.195 [2024-11-26 19:20:10.914413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 00:29:54.195 [2024-11-26 19:20:10.914784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.195 [2024-11-26 19:20:10.914813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 00:29:54.195 [2024-11-26 19:20:10.915191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.195 [2024-11-26 19:20:10.915221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 00:29:54.195 [2024-11-26 19:20:10.915578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.195 [2024-11-26 19:20:10.915609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 00:29:54.195 [2024-11-26 19:20:10.915984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.195 [2024-11-26 19:20:10.916012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 00:29:54.195 [2024-11-26 19:20:10.916404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.195 [2024-11-26 19:20:10.916438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 
00:29:54.195 [2024-11-26 19:20:10.916707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.195 [2024-11-26 19:20:10.916736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 00:29:54.195 [2024-11-26 19:20:10.917093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.195 [2024-11-26 19:20:10.917123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 00:29:54.195 [2024-11-26 19:20:10.917489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.195 [2024-11-26 19:20:10.917521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 00:29:54.195 [2024-11-26 19:20:10.917878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.195 [2024-11-26 19:20:10.917909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 00:29:54.195 [2024-11-26 19:20:10.918271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.195 [2024-11-26 19:20:10.918302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 00:29:54.195 [2024-11-26 19:20:10.918567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.195 [2024-11-26 19:20:10.918599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 00:29:54.195 [2024-11-26 19:20:10.918866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.195 [2024-11-26 19:20:10.918895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 00:29:54.195 [2024-11-26 19:20:10.919234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.195 [2024-11-26 19:20:10.919264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 00:29:54.195 [2024-11-26 19:20:10.919473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.195 [2024-11-26 19:20:10.919502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 00:29:54.195 [2024-11-26 19:20:10.919887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.195 [2024-11-26 19:20:10.919918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.195 qpair failed and we were unable to recover it. 
00:29:54.195 [2024-11-26 19:20:10.920276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.196 [2024-11-26 19:20:10.920359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.196 qpair failed and we were unable to recover it. 00:29:54.196 [2024-11-26 19:20:10.920725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.196 [2024-11-26 19:20:10.920756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.196 qpair failed and we were unable to recover it. 00:29:54.196 [2024-11-26 19:20:10.921131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.196 [2024-11-26 19:20:10.921181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.196 qpair failed and we were unable to recover it. 00:29:54.196 [2024-11-26 19:20:10.921535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.196 [2024-11-26 19:20:10.921565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.196 qpair failed and we were unable to recover it. 00:29:54.196 [2024-11-26 19:20:10.921940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.196 [2024-11-26 19:20:10.921969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.196 qpair failed and we were unable to recover it. 00:29:54.196 [2024-11-26 19:20:10.922218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.196 [2024-11-26 19:20:10.922253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.196 qpair failed and we were unable to recover it. 00:29:54.196 [2024-11-26 19:20:10.922628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.196 [2024-11-26 19:20:10.922658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.196 qpair failed and we were unable to recover it. 00:29:54.196 [2024-11-26 19:20:10.923024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.196 [2024-11-26 19:20:10.923053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.196 qpair failed and we were unable to recover it. 00:29:54.196 [2024-11-26 19:20:10.923403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.196 [2024-11-26 19:20:10.923435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.196 qpair failed and we were unable to recover it. 00:29:54.196 [2024-11-26 19:20:10.923790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.196 [2024-11-26 19:20:10.923819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.196 qpair failed and we were unable to recover it. 
00:29:54.196 [2024-11-26 19:20:10.924188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.196 [2024-11-26 19:20:10.924219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.196 qpair failed and we were unable to recover it. 00:29:54.196 [2024-11-26 19:20:10.924575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.196 [2024-11-26 19:20:10.924604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.196 qpair failed and we were unable to recover it. 00:29:54.196 [2024-11-26 19:20:10.924975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.196 [2024-11-26 19:20:10.925004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.196 qpair failed and we were unable to recover it. 00:29:54.196 [2024-11-26 19:20:10.925384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.196 [2024-11-26 19:20:10.925415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.196 qpair failed and we were unable to recover it. 00:29:54.196 [2024-11-26 19:20:10.925808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.196 [2024-11-26 19:20:10.925847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.196 qpair failed and we were unable to recover it. 00:29:54.196 [2024-11-26 19:20:10.926206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.196 [2024-11-26 19:20:10.926238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.196 qpair failed and we were unable to recover it. 00:29:54.196 [2024-11-26 19:20:10.926602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.196 [2024-11-26 19:20:10.926631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.196 qpair failed and we were unable to recover it. 00:29:54.196 [2024-11-26 19:20:10.926993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.196 [2024-11-26 19:20:10.927022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.196 qpair failed and we were unable to recover it. 00:29:54.196 [2024-11-26 19:20:10.927392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.196 [2024-11-26 19:20:10.927422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.196 qpair failed and we were unable to recover it. 00:29:54.196 [2024-11-26 19:20:10.927787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.196 [2024-11-26 19:20:10.927816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.196 qpair failed and we were unable to recover it. 
00:29:54.196 [2024-11-26 19:20:10.928192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.196 [2024-11-26 19:20:10.928222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420
00:29:54.196 qpair failed and we were unable to recover it.
00:29:54.196 [2024-11-26 19:20:10.928595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.196 [2024-11-26 19:20:10.928623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420
00:29:54.196 qpair failed and we were unable to recover it.
00:29:54.196 [2024-11-26 19:20:10.928984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.196 [2024-11-26 19:20:10.929013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420
00:29:54.196 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() refused, sock connection error on tqpair=0x13780c0 at 10.0.0.2 port 4420, qpair unrecoverable) repeats with fresh timestamps for every further reconnect attempt from 19:20:10.929 through 19:20:11.009 ...]
00:29:54.202 [2024-11-26 19:20:11.009456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.202 [2024-11-26 19:20:11.009487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420
00:29:54.202 qpair failed and we were unable to recover it.
00:29:54.202 [2024-11-26 19:20:11.009846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.202 [2024-11-26 19:20:11.009876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.202 qpair failed and we were unable to recover it. 00:29:54.202 [2024-11-26 19:20:11.010195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.202 [2024-11-26 19:20:11.010232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.202 qpair failed and we were unable to recover it. 00:29:54.202 [2024-11-26 19:20:11.010605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.202 [2024-11-26 19:20:11.010634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.202 qpair failed and we were unable to recover it. 00:29:54.202 [2024-11-26 19:20:11.011000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.202 [2024-11-26 19:20:11.011029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.202 qpair failed and we were unable to recover it. 00:29:54.202 [2024-11-26 19:20:11.011386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.202 [2024-11-26 19:20:11.011419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.202 qpair failed and we were unable to recover it. 00:29:54.202 [2024-11-26 19:20:11.011824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.202 [2024-11-26 19:20:11.011854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.202 qpair failed and we were unable to recover it. 00:29:54.202 [2024-11-26 19:20:11.012188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.202 [2024-11-26 19:20:11.012218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.202 qpair failed and we were unable to recover it. 00:29:54.202 [2024-11-26 19:20:11.012621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.202 [2024-11-26 19:20:11.012650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.202 qpair failed and we were unable to recover it. 00:29:54.202 [2024-11-26 19:20:11.013009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.202 [2024-11-26 19:20:11.013037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.202 qpair failed and we were unable to recover it. 00:29:54.202 [2024-11-26 19:20:11.013411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.202 [2024-11-26 19:20:11.013440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.202 qpair failed and we were unable to recover it. 
00:29:54.202 [2024-11-26 19:20:11.013801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.202 [2024-11-26 19:20:11.013831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.202 qpair failed and we were unable to recover it. 00:29:54.202 [2024-11-26 19:20:11.014178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.202 [2024-11-26 19:20:11.014208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.202 qpair failed and we were unable to recover it. 00:29:54.202 [2024-11-26 19:20:11.014585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.202 [2024-11-26 19:20:11.014614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.202 qpair failed and we were unable to recover it. 00:29:54.202 [2024-11-26 19:20:11.014974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.202 [2024-11-26 19:20:11.015003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.202 qpair failed and we were unable to recover it. 00:29:54.202 [2024-11-26 19:20:11.015391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.202 [2024-11-26 19:20:11.015421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.202 qpair failed and we were unable to recover it. 00:29:54.202 [2024-11-26 19:20:11.015780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.202 [2024-11-26 19:20:11.015809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.202 qpair failed and we were unable to recover it. 00:29:54.202 [2024-11-26 19:20:11.016184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.202 [2024-11-26 19:20:11.016217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.202 qpair failed and we were unable to recover it. 00:29:54.202 [2024-11-26 19:20:11.016557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.202 [2024-11-26 19:20:11.016588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.202 qpair failed and we were unable to recover it. 00:29:54.202 [2024-11-26 19:20:11.016926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.202 [2024-11-26 19:20:11.016957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.202 qpair failed and we were unable to recover it. 00:29:54.202 [2024-11-26 19:20:11.017303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.202 [2024-11-26 19:20:11.017333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.202 qpair failed and we were unable to recover it. 
00:29:54.202 [2024-11-26 19:20:11.017785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.202 [2024-11-26 19:20:11.017814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.202 qpair failed and we were unable to recover it. 00:29:54.202 [2024-11-26 19:20:11.018183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.202 [2024-11-26 19:20:11.018215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.202 qpair failed and we were unable to recover it. 00:29:54.202 [2024-11-26 19:20:11.018578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.202 [2024-11-26 19:20:11.018607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.202 qpair failed and we were unable to recover it. 00:29:54.202 [2024-11-26 19:20:11.019023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.202 [2024-11-26 19:20:11.019053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.202 qpair failed and we were unable to recover it. 00:29:54.202 [2024-11-26 19:20:11.019410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.202 [2024-11-26 19:20:11.019442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.202 qpair failed and we were unable to recover it. 00:29:54.202 [2024-11-26 19:20:11.019796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.202 [2024-11-26 19:20:11.019826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.202 qpair failed and we were unable to recover it. 00:29:54.202 [2024-11-26 19:20:11.020207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.202 [2024-11-26 19:20:11.020236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.202 qpair failed and we were unable to recover it. 00:29:54.202 [2024-11-26 19:20:11.020599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.202 [2024-11-26 19:20:11.020627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.202 qpair failed and we were unable to recover it. 00:29:54.202 [2024-11-26 19:20:11.020895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.202 [2024-11-26 19:20:11.020931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.202 qpair failed and we were unable to recover it. 00:29:54.202 [2024-11-26 19:20:11.021287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.203 [2024-11-26 19:20:11.021318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.203 qpair failed and we were unable to recover it. 
00:29:54.203 [2024-11-26 19:20:11.021609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.203 [2024-11-26 19:20:11.021637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.203 qpair failed and we were unable to recover it. 00:29:54.203 [2024-11-26 19:20:11.022079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.203 [2024-11-26 19:20:11.022107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.203 qpair failed and we were unable to recover it. 00:29:54.203 [2024-11-26 19:20:11.022498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.203 [2024-11-26 19:20:11.022529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.203 qpair failed and we were unable to recover it. 00:29:54.203 [2024-11-26 19:20:11.022896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.203 [2024-11-26 19:20:11.022926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.203 qpair failed and we were unable to recover it. 00:29:54.203 [2024-11-26 19:20:11.023270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.203 [2024-11-26 19:20:11.023304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.203 qpair failed and we were unable to recover it. 00:29:54.203 [2024-11-26 19:20:11.023683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.203 [2024-11-26 19:20:11.023711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.203 qpair failed and we were unable to recover it. 00:29:54.203 [2024-11-26 19:20:11.024066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.203 [2024-11-26 19:20:11.024094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.203 qpair failed and we were unable to recover it. 00:29:54.203 [2024-11-26 19:20:11.024456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.203 [2024-11-26 19:20:11.024486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.203 qpair failed and we were unable to recover it. 00:29:54.203 [2024-11-26 19:20:11.024847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.203 [2024-11-26 19:20:11.024877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.203 qpair failed and we were unable to recover it. 00:29:54.203 [2024-11-26 19:20:11.025237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.203 [2024-11-26 19:20:11.025267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.203 qpair failed and we were unable to recover it. 
00:29:54.203 [2024-11-26 19:20:11.025582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.203 [2024-11-26 19:20:11.025618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.203 qpair failed and we were unable to recover it. 00:29:54.203 [2024-11-26 19:20:11.025993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.203 [2024-11-26 19:20:11.026021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.203 qpair failed and we were unable to recover it. 00:29:54.203 [2024-11-26 19:20:11.026366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.203 [2024-11-26 19:20:11.026397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.203 qpair failed and we were unable to recover it. 00:29:54.203 [2024-11-26 19:20:11.026730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.203 [2024-11-26 19:20:11.026757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.203 qpair failed and we were unable to recover it. 00:29:54.203 [2024-11-26 19:20:11.027125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.203 [2024-11-26 19:20:11.027154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.203 qpair failed and we were unable to recover it. 00:29:54.203 [2024-11-26 19:20:11.027532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.203 [2024-11-26 19:20:11.027562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.203 qpair failed and we were unable to recover it. 00:29:54.203 [2024-11-26 19:20:11.027923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.203 [2024-11-26 19:20:11.027952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.203 qpair failed and we were unable to recover it. 00:29:54.203 [2024-11-26 19:20:11.028103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.203 [2024-11-26 19:20:11.028136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.203 qpair failed and we were unable to recover it. 00:29:54.203 [2024-11-26 19:20:11.028520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.203 [2024-11-26 19:20:11.028550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.203 qpair failed and we were unable to recover it. 00:29:54.203 [2024-11-26 19:20:11.028911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.203 [2024-11-26 19:20:11.028940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.203 qpair failed and we were unable to recover it. 
00:29:54.203 [2024-11-26 19:20:11.029318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.203 [2024-11-26 19:20:11.029348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.203 qpair failed and we were unable to recover it. 00:29:54.203 [2024-11-26 19:20:11.029702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.203 [2024-11-26 19:20:11.029732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.203 qpair failed and we were unable to recover it. 00:29:54.203 [2024-11-26 19:20:11.030078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.203 [2024-11-26 19:20:11.030107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.203 qpair failed and we were unable to recover it. 00:29:54.203 [2024-11-26 19:20:11.030477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.203 [2024-11-26 19:20:11.030508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.203 qpair failed and we were unable to recover it. 00:29:54.203 [2024-11-26 19:20:11.030868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.203 [2024-11-26 19:20:11.030896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.203 qpair failed and we were unable to recover it. 00:29:54.203 [2024-11-26 19:20:11.031259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.203 [2024-11-26 19:20:11.031296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.203 qpair failed and we were unable to recover it. 00:29:54.203 [2024-11-26 19:20:11.031650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.203 [2024-11-26 19:20:11.031679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.203 qpair failed and we were unable to recover it. 00:29:54.203 [2024-11-26 19:20:11.032014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.203 [2024-11-26 19:20:11.032043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.203 qpair failed and we were unable to recover it. 00:29:54.203 [2024-11-26 19:20:11.032305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.203 [2024-11-26 19:20:11.032336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.203 qpair failed and we were unable to recover it. 00:29:54.203 [2024-11-26 19:20:11.032679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.203 [2024-11-26 19:20:11.032707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.203 qpair failed and we were unable to recover it. 
00:29:54.203 [2024-11-26 19:20:11.033005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.203 [2024-11-26 19:20:11.033033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.203 qpair failed and we were unable to recover it. 00:29:54.203 [2024-11-26 19:20:11.033404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.203 [2024-11-26 19:20:11.033436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.203 qpair failed and we were unable to recover it. 00:29:54.203 [2024-11-26 19:20:11.033795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.203 [2024-11-26 19:20:11.033825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.203 qpair failed and we were unable to recover it. 00:29:54.203 [2024-11-26 19:20:11.034184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.203 [2024-11-26 19:20:11.034215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.203 qpair failed and we were unable to recover it. 00:29:54.203 [2024-11-26 19:20:11.034577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.203 [2024-11-26 19:20:11.034605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.203 qpair failed and we were unable to recover it. 00:29:54.204 [2024-11-26 19:20:11.034958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.204 [2024-11-26 19:20:11.034986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.204 qpair failed and we were unable to recover it. 00:29:54.204 [2024-11-26 19:20:11.035352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.204 [2024-11-26 19:20:11.035381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.204 qpair failed and we were unable to recover it. 00:29:54.204 [2024-11-26 19:20:11.035751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.204 [2024-11-26 19:20:11.035779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.204 qpair failed and we were unable to recover it. 00:29:54.204 [2024-11-26 19:20:11.036172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.204 [2024-11-26 19:20:11.036202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.204 qpair failed and we were unable to recover it. 00:29:54.204 [2024-11-26 19:20:11.036616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.204 [2024-11-26 19:20:11.036646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.204 qpair failed and we were unable to recover it. 
00:29:54.204 [2024-11-26 19:20:11.036970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.204 [2024-11-26 19:20:11.036999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.204 qpair failed and we were unable to recover it. 00:29:54.204 [2024-11-26 19:20:11.037328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.204 [2024-11-26 19:20:11.037359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.204 qpair failed and we were unable to recover it. 00:29:54.204 [2024-11-26 19:20:11.037730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.204 [2024-11-26 19:20:11.037759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.204 qpair failed and we were unable to recover it. 00:29:54.204 [2024-11-26 19:20:11.038108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.204 [2024-11-26 19:20:11.038137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.204 qpair failed and we were unable to recover it. 00:29:54.204 [2024-11-26 19:20:11.038380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.204 [2024-11-26 19:20:11.038414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.204 qpair failed and we were unable to recover it. 00:29:54.204 [2024-11-26 19:20:11.038797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.204 [2024-11-26 19:20:11.038828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.204 qpair failed and we were unable to recover it. 00:29:54.204 [2024-11-26 19:20:11.039191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.204 [2024-11-26 19:20:11.039223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.204 qpair failed and we were unable to recover it. 00:29:54.204 [2024-11-26 19:20:11.039569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.204 [2024-11-26 19:20:11.039598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.204 qpair failed and we were unable to recover it. 00:29:54.204 [2024-11-26 19:20:11.039959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.204 [2024-11-26 19:20:11.039988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.204 qpair failed and we were unable to recover it. 00:29:54.204 [2024-11-26 19:20:11.040365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.204 [2024-11-26 19:20:11.040393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.204 qpair failed and we were unable to recover it. 
00:29:54.204 [2024-11-26 19:20:11.040760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.204 [2024-11-26 19:20:11.040788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.204 qpair failed and we were unable to recover it. 00:29:54.204 [2024-11-26 19:20:11.041171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.204 [2024-11-26 19:20:11.041202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.204 qpair failed and we were unable to recover it. 00:29:54.204 [2024-11-26 19:20:11.041630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.204 [2024-11-26 19:20:11.041660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.204 qpair failed and we were unable to recover it. 00:29:54.204 [2024-11-26 19:20:11.042019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.204 [2024-11-26 19:20:11.042048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.204 qpair failed and we were unable to recover it. 00:29:54.204 [2024-11-26 19:20:11.042392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.204 [2024-11-26 19:20:11.042422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.204 qpair failed and we were unable to recover it. 00:29:54.204 [2024-11-26 19:20:11.042799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.204 [2024-11-26 19:20:11.042827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.204 qpair failed and we were unable to recover it. 00:29:54.204 [2024-11-26 19:20:11.043180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.204 [2024-11-26 19:20:11.043213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.204 qpair failed and we were unable to recover it. 00:29:54.204 [2024-11-26 19:20:11.043573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.204 [2024-11-26 19:20:11.043602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.204 qpair failed and we were unable to recover it. 00:29:54.204 [2024-11-26 19:20:11.044012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.204 [2024-11-26 19:20:11.044040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.204 qpair failed and we were unable to recover it. 00:29:54.204 [2024-11-26 19:20:11.044391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.204 [2024-11-26 19:20:11.044421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.204 qpair failed and we were unable to recover it. 
00:29:54.204 [2024-11-26 19:20:11.044760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.204 [2024-11-26 19:20:11.044788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.204 qpair failed and we were unable to recover it. 00:29:54.204 [2024-11-26 19:20:11.045052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.204 [2024-11-26 19:20:11.045081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.204 qpair failed and we were unable to recover it. 00:29:54.204 [2024-11-26 19:20:11.045470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.204 [2024-11-26 19:20:11.045501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.204 qpair failed and we were unable to recover it. 00:29:54.204 [2024-11-26 19:20:11.045874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.204 [2024-11-26 19:20:11.045905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.204 qpair failed and we were unable to recover it. 00:29:54.204 [2024-11-26 19:20:11.046280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.204 [2024-11-26 19:20:11.046311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.204 qpair failed and we were unable to recover it. 00:29:54.204 [2024-11-26 19:20:11.046712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.204 [2024-11-26 19:20:11.046740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.204 qpair failed and we were unable to recover it. 00:29:54.204 [2024-11-26 19:20:11.047096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.204 [2024-11-26 19:20:11.047130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.204 qpair failed and we were unable to recover it. 00:29:54.204 [2024-11-26 19:20:11.047533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.204 [2024-11-26 19:20:11.047565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.204 qpair failed and we were unable to recover it. 00:29:54.204 [2024-11-26 19:20:11.047926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.204 [2024-11-26 19:20:11.047955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.204 qpair failed and we were unable to recover it. 00:29:54.204 [2024-11-26 19:20:11.048328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.204 [2024-11-26 19:20:11.048358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.204 qpair failed and we were unable to recover it. 
00:29:54.204 [2024-11-26 19:20:11.048727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.204 [2024-11-26 19:20:11.048755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.204 qpair failed and we were unable to recover it. 00:29:54.204 [2024-11-26 19:20:11.049058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.204 [2024-11-26 19:20:11.049086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.204 qpair failed and we were unable to recover it. 00:29:54.204 [2024-11-26 19:20:11.049493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.204 [2024-11-26 19:20:11.049523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.205 qpair failed and we were unable to recover it. 00:29:54.205 [2024-11-26 19:20:11.049897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.205 [2024-11-26 19:20:11.049926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.205 qpair failed and we were unable to recover it. 00:29:54.205 [2024-11-26 19:20:11.050293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.205 [2024-11-26 19:20:11.050323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.205 qpair failed and we were unable to recover it. 00:29:54.205 [2024-11-26 19:20:11.050754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.205 [2024-11-26 19:20:11.050783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.205 qpair failed and we were unable to recover it. 00:29:54.205 [2024-11-26 19:20:11.051148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.205 [2024-11-26 19:20:11.051187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.205 qpair failed and we were unable to recover it. 00:29:54.205 [2024-11-26 19:20:11.051522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.205 [2024-11-26 19:20:11.051550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.205 qpair failed and we were unable to recover it. 00:29:54.205 [2024-11-26 19:20:11.051795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.205 [2024-11-26 19:20:11.051825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.205 qpair failed and we were unable to recover it. 00:29:54.205 [2024-11-26 19:20:11.052187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.205 [2024-11-26 19:20:11.052216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.205 qpair failed and we were unable to recover it. 
00:29:54.205 [2024-11-26 19:20:11.052576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.205 [2024-11-26 19:20:11.052605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.205 qpair failed and we were unable to recover it. 00:29:54.205 [2024-11-26 19:20:11.052972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.205 [2024-11-26 19:20:11.053001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.205 qpair failed and we were unable to recover it. 00:29:54.205 [2024-11-26 19:20:11.053352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.205 [2024-11-26 19:20:11.053381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.205 qpair failed and we were unable to recover it. 00:29:54.205 [2024-11-26 19:20:11.053747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.205 [2024-11-26 19:20:11.053776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.205 qpair failed and we were unable to recover it. 00:29:54.205 [2024-11-26 19:20:11.054136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.205 [2024-11-26 19:20:11.054176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.205 qpair failed and we were unable to recover it. 00:29:54.205 [2024-11-26 19:20:11.054519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.205 [2024-11-26 19:20:11.054549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.205 qpair failed and we were unable to recover it. 00:29:54.205 [2024-11-26 19:20:11.054909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.205 [2024-11-26 19:20:11.054939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.205 qpair failed and we were unable to recover it. 00:29:54.205 [2024-11-26 19:20:11.055304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.205 [2024-11-26 19:20:11.055334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.205 qpair failed and we were unable to recover it. 00:29:54.205 [2024-11-26 19:20:11.055694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.205 [2024-11-26 19:20:11.055723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.205 qpair failed and we were unable to recover it. 00:29:54.205 [2024-11-26 19:20:11.056056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.205 [2024-11-26 19:20:11.056084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.205 qpair failed and we were unable to recover it. 
00:29:54.205 [2024-11-26 19:20:11.056425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.205 [2024-11-26 19:20:11.056454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.205 qpair failed and we were unable to recover it. 00:29:54.205 [2024-11-26 19:20:11.056819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.205 [2024-11-26 19:20:11.056850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.205 qpair failed and we were unable to recover it. 00:29:54.205 [2024-11-26 19:20:11.057091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.205 [2024-11-26 19:20:11.057123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.205 qpair failed and we were unable to recover it. 00:29:54.205 [2024-11-26 19:20:11.057497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.205 [2024-11-26 19:20:11.057535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.205 qpair failed and we were unable to recover it. 00:29:54.205 [2024-11-26 19:20:11.057783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.205 [2024-11-26 19:20:11.057811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.205 qpair failed and we were unable to recover it. 00:29:54.205 [2024-11-26 19:20:11.058181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.205 [2024-11-26 19:20:11.058211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.205 qpair failed and we were unable to recover it. 00:29:54.205 [2024-11-26 19:20:11.058597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.205 [2024-11-26 19:20:11.058626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.205 qpair failed and we were unable to recover it. 00:29:54.205 [2024-11-26 19:20:11.058982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.205 [2024-11-26 19:20:11.059011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.205 qpair failed and we were unable to recover it. 00:29:54.205 [2024-11-26 19:20:11.059434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.205 [2024-11-26 19:20:11.059465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.205 qpair failed and we were unable to recover it. 00:29:54.205 [2024-11-26 19:20:11.059825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.205 [2024-11-26 19:20:11.059853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.205 qpair failed and we were unable to recover it. 
00:29:54.205 [2024-11-26 19:20:11.060199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.205 [2024-11-26 19:20:11.060229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420
00:29:54.205 qpair failed and we were unable to recover it.
[... the same three-line failure pattern repeats for every reconnect attempt between 19:20:11.060 and 19:20:11.142 (elapsed 00:29:54.205-00:29:54.211), always with errno = 111, tqpair=0x13780c0, addr=10.0.0.2, port=4420, each ending in "qpair failed and we were unable to recover it." ...]
00:29:54.211 [2024-11-26 19:20:11.142998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.211 [2024-11-26 19:20:11.143035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.211 qpair failed and we were unable to recover it. 00:29:54.211 [2024-11-26 19:20:11.143383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.211 [2024-11-26 19:20:11.143414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.211 qpair failed and we were unable to recover it. 00:29:54.211 [2024-11-26 19:20:11.143767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.211 [2024-11-26 19:20:11.143797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.211 qpair failed and we were unable to recover it. 00:29:54.211 [2024-11-26 19:20:11.144038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.211 [2024-11-26 19:20:11.144067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.211 qpair failed and we were unable to recover it. 00:29:54.211 [2024-11-26 19:20:11.144419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.211 [2024-11-26 19:20:11.144449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.211 qpair failed and we were unable to recover it. 00:29:54.211 [2024-11-26 19:20:11.144814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.211 [2024-11-26 19:20:11.144843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.211 qpair failed and we were unable to recover it. 00:29:54.211 [2024-11-26 19:20:11.145195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.211 [2024-11-26 19:20:11.145227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.211 qpair failed and we were unable to recover it. 00:29:54.211 [2024-11-26 19:20:11.145624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.211 [2024-11-26 19:20:11.145654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.211 qpair failed and we were unable to recover it. 00:29:54.211 [2024-11-26 19:20:11.146022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.211 [2024-11-26 19:20:11.146051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.211 qpair failed and we were unable to recover it. 00:29:54.211 [2024-11-26 19:20:11.146491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.211 [2024-11-26 19:20:11.146522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.211 qpair failed and we were unable to recover it. 
00:29:54.211 [2024-11-26 19:20:11.146869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.211 [2024-11-26 19:20:11.146900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.211 qpair failed and we were unable to recover it. 00:29:54.211 [2024-11-26 19:20:11.147239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.211 [2024-11-26 19:20:11.147270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.211 qpair failed and we were unable to recover it. 00:29:54.211 [2024-11-26 19:20:11.147634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.211 [2024-11-26 19:20:11.147666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.211 qpair failed and we were unable to recover it. 00:29:54.211 [2024-11-26 19:20:11.147923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.211 [2024-11-26 19:20:11.147954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.211 qpair failed and we were unable to recover it. 00:29:54.211 [2024-11-26 19:20:11.148320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.211 [2024-11-26 19:20:11.148351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.211 qpair failed and we were unable to recover it. 00:29:54.211 [2024-11-26 19:20:11.148720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.211 [2024-11-26 19:20:11.148749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.211 qpair failed and we were unable to recover it. 00:29:54.211 [2024-11-26 19:20:11.149096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.211 [2024-11-26 19:20:11.149124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.211 qpair failed and we were unable to recover it. 00:29:54.211 [2024-11-26 19:20:11.149542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.211 [2024-11-26 19:20:11.149574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.211 qpair failed and we were unable to recover it. 00:29:54.211 [2024-11-26 19:20:11.149915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.211 [2024-11-26 19:20:11.149943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.211 qpair failed and we were unable to recover it. 00:29:54.211 [2024-11-26 19:20:11.150299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.211 [2024-11-26 19:20:11.150330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.211 qpair failed and we were unable to recover it. 
00:29:54.211 [2024-11-26 19:20:11.150708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.211 [2024-11-26 19:20:11.150739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.211 qpair failed and we were unable to recover it. 00:29:54.211 [2024-11-26 19:20:11.151185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.211 [2024-11-26 19:20:11.151218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.211 qpair failed and we were unable to recover it. 00:29:54.211 [2024-11-26 19:20:11.151580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.211 [2024-11-26 19:20:11.151609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.211 qpair failed and we were unable to recover it. 00:29:54.211 [2024-11-26 19:20:11.151973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.211 [2024-11-26 19:20:11.152003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.211 qpair failed and we were unable to recover it. 00:29:54.211 [2024-11-26 19:20:11.152335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.211 [2024-11-26 19:20:11.152365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.211 qpair failed and we were unable to recover it. 00:29:54.212 [2024-11-26 19:20:11.152707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.212 [2024-11-26 19:20:11.152736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.212 qpair failed and we were unable to recover it. 00:29:54.212 [2024-11-26 19:20:11.153103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.212 [2024-11-26 19:20:11.153131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.212 qpair failed and we were unable to recover it. 00:29:54.212 [2024-11-26 19:20:11.153500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.212 [2024-11-26 19:20:11.153537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.212 qpair failed and we were unable to recover it. 00:29:54.212 [2024-11-26 19:20:11.153934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.212 [2024-11-26 19:20:11.153966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.212 qpair failed and we were unable to recover it. 00:29:54.212 [2024-11-26 19:20:11.154318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.212 [2024-11-26 19:20:11.154348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.212 qpair failed and we were unable to recover it. 
00:29:54.212 [2024-11-26 19:20:11.154712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.212 [2024-11-26 19:20:11.154743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.212 qpair failed and we were unable to recover it. 00:29:54.212 [2024-11-26 19:20:11.155095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.212 [2024-11-26 19:20:11.155123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.212 qpair failed and we were unable to recover it. 00:29:54.212 [2024-11-26 19:20:11.155528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.212 [2024-11-26 19:20:11.155559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.212 qpair failed and we were unable to recover it. 00:29:54.212 [2024-11-26 19:20:11.155940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.212 [2024-11-26 19:20:11.155971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.212 qpair failed and we were unable to recover it. 00:29:54.212 [2024-11-26 19:20:11.156335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.212 [2024-11-26 19:20:11.156365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.212 qpair failed and we were unable to recover it. 00:29:54.212 [2024-11-26 19:20:11.156728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.212 [2024-11-26 19:20:11.156758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.212 qpair failed and we were unable to recover it. 00:29:54.212 [2024-11-26 19:20:11.157117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.212 [2024-11-26 19:20:11.157147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.212 qpair failed and we were unable to recover it. 00:29:54.212 [2024-11-26 19:20:11.157524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.212 [2024-11-26 19:20:11.157556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.212 qpair failed and we were unable to recover it. 00:29:54.212 [2024-11-26 19:20:11.157908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.212 [2024-11-26 19:20:11.157939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.212 qpair failed and we were unable to recover it. 00:29:54.212 [2024-11-26 19:20:11.158198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.212 [2024-11-26 19:20:11.158229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.212 qpair failed and we were unable to recover it. 
00:29:54.212 [2024-11-26 19:20:11.158577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.212 [2024-11-26 19:20:11.158605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.212 qpair failed and we were unable to recover it. 00:29:54.212 [2024-11-26 19:20:11.158974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.212 [2024-11-26 19:20:11.159005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.212 qpair failed and we were unable to recover it. 00:29:54.212 [2024-11-26 19:20:11.159367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.212 [2024-11-26 19:20:11.159399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.212 qpair failed and we were unable to recover it. 00:29:54.212 [2024-11-26 19:20:11.159763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.212 [2024-11-26 19:20:11.159791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.212 qpair failed and we were unable to recover it. 00:29:54.212 [2024-11-26 19:20:11.160208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.212 [2024-11-26 19:20:11.160240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.212 qpair failed and we were unable to recover it. 00:29:54.212 [2024-11-26 19:20:11.160500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.212 [2024-11-26 19:20:11.160532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.212 qpair failed and we were unable to recover it. 00:29:54.212 [2024-11-26 19:20:11.160876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.212 [2024-11-26 19:20:11.160904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.212 qpair failed and we were unable to recover it. 00:29:54.212 [2024-11-26 19:20:11.161275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.212 [2024-11-26 19:20:11.161308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.212 qpair failed and we were unable to recover it. 00:29:54.212 [2024-11-26 19:20:11.161677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.212 [2024-11-26 19:20:11.161706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.212 qpair failed and we were unable to recover it. 00:29:54.212 [2024-11-26 19:20:11.162012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.212 [2024-11-26 19:20:11.162040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.212 qpair failed and we were unable to recover it. 
00:29:54.212 [2024-11-26 19:20:11.162414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.212 [2024-11-26 19:20:11.162446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.212 qpair failed and we were unable to recover it. 00:29:54.212 [2024-11-26 19:20:11.162812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.212 [2024-11-26 19:20:11.162844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.212 qpair failed and we were unable to recover it. 00:29:54.212 [2024-11-26 19:20:11.163207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.212 [2024-11-26 19:20:11.163236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.212 qpair failed and we were unable to recover it. 00:29:54.212 [2024-11-26 19:20:11.163617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.212 [2024-11-26 19:20:11.163647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.212 qpair failed and we were unable to recover it. 00:29:54.212 [2024-11-26 19:20:11.164043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.212 [2024-11-26 19:20:11.164072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.212 qpair failed and we were unable to recover it. 00:29:54.212 [2024-11-26 19:20:11.164442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.212 [2024-11-26 19:20:11.164474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.212 qpair failed and we were unable to recover it. 00:29:54.212 [2024-11-26 19:20:11.164831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.212 [2024-11-26 19:20:11.164861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.212 qpair failed and we were unable to recover it. 00:29:54.212 [2024-11-26 19:20:11.165099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.212 [2024-11-26 19:20:11.165133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.212 qpair failed and we were unable to recover it. 00:29:54.212 [2024-11-26 19:20:11.166998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.212 [2024-11-26 19:20:11.167063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.212 qpair failed and we were unable to recover it. 00:29:54.213 [2024-11-26 19:20:11.167389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.213 [2024-11-26 19:20:11.167424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.213 qpair failed and we were unable to recover it. 
00:29:54.213 [2024-11-26 19:20:11.167836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.213 [2024-11-26 19:20:11.167867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.213 qpair failed and we were unable to recover it. 00:29:54.213 [2024-11-26 19:20:11.168231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.213 [2024-11-26 19:20:11.168263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.213 qpair failed and we were unable to recover it. 00:29:54.213 [2024-11-26 19:20:11.168640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.213 [2024-11-26 19:20:11.168670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.213 qpair failed and we were unable to recover it. 00:29:54.213 [2024-11-26 19:20:11.168980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.213 [2024-11-26 19:20:11.169009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.213 qpair failed and we were unable to recover it. 00:29:54.213 [2024-11-26 19:20:11.169393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.213 [2024-11-26 19:20:11.169425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.213 qpair failed and we were unable to recover it. 00:29:54.213 [2024-11-26 19:20:11.169773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.213 [2024-11-26 19:20:11.169802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.213 qpair failed and we were unable to recover it. 00:29:54.213 [2024-11-26 19:20:11.170172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.213 [2024-11-26 19:20:11.170203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.213 qpair failed and we were unable to recover it. 00:29:54.213 [2024-11-26 19:20:11.170552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.213 [2024-11-26 19:20:11.170580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.213 qpair failed and we were unable to recover it. 00:29:54.213 [2024-11-26 19:20:11.170934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.213 [2024-11-26 19:20:11.170963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.213 qpair failed and we were unable to recover it. 00:29:54.213 [2024-11-26 19:20:11.171327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.213 [2024-11-26 19:20:11.171358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.213 qpair failed and we were unable to recover it. 
00:29:54.213 [2024-11-26 19:20:11.171714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.213 [2024-11-26 19:20:11.171742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.213 qpair failed and we were unable to recover it. 00:29:54.213 [2024-11-26 19:20:11.172098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.213 [2024-11-26 19:20:11.172130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.213 qpair failed and we were unable to recover it. 00:29:54.213 [2024-11-26 19:20:11.172522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.213 [2024-11-26 19:20:11.172554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.213 qpair failed and we were unable to recover it. 00:29:54.213 [2024-11-26 19:20:11.172912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.213 [2024-11-26 19:20:11.172943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.213 qpair failed and we were unable to recover it. 00:29:54.213 [2024-11-26 19:20:11.173316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.213 [2024-11-26 19:20:11.173348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.213 qpair failed and we were unable to recover it. 00:29:54.213 [2024-11-26 19:20:11.173686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.213 [2024-11-26 19:20:11.173715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.213 qpair failed and we were unable to recover it. 00:29:54.213 [2024-11-26 19:20:11.174135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.213 [2024-11-26 19:20:11.174177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.213 qpair failed and we were unable to recover it. 00:29:54.213 [2024-11-26 19:20:11.174537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.213 [2024-11-26 19:20:11.174567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.213 qpair failed and we were unable to recover it. 00:29:54.213 [2024-11-26 19:20:11.174902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.213 [2024-11-26 19:20:11.174934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.213 qpair failed and we were unable to recover it. 00:29:54.213 [2024-11-26 19:20:11.175292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.213 [2024-11-26 19:20:11.175321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.213 qpair failed and we were unable to recover it. 
00:29:54.213 [2024-11-26 19:20:11.175693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.213 [2024-11-26 19:20:11.175723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.213 qpair failed and we were unable to recover it. 00:29:54.213 [2024-11-26 19:20:11.176084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.213 [2024-11-26 19:20:11.176113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.213 qpair failed and we were unable to recover it. 00:29:54.213 [2024-11-26 19:20:11.176495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.213 [2024-11-26 19:20:11.176530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.213 qpair failed and we were unable to recover it. 00:29:54.213 [2024-11-26 19:20:11.176954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.213 [2024-11-26 19:20:11.176984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.213 qpair failed and we were unable to recover it. 00:29:54.213 [2024-11-26 19:20:11.177331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.213 [2024-11-26 19:20:11.177362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.213 qpair failed and we were unable to recover it. 00:29:54.213 [2024-11-26 19:20:11.177721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.213 [2024-11-26 19:20:11.177749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.213 qpair failed and we were unable to recover it. 00:29:54.213 [2024-11-26 19:20:11.178117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.213 [2024-11-26 19:20:11.178145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.213 qpair failed and we were unable to recover it. 00:29:54.213 [2024-11-26 19:20:11.178555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.213 [2024-11-26 19:20:11.178585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.213 qpair failed and we were unable to recover it. 00:29:54.213 [2024-11-26 19:20:11.178962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.213 [2024-11-26 19:20:11.178991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.213 qpair failed and we were unable to recover it. 00:29:54.213 [2024-11-26 19:20:11.179363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.213 [2024-11-26 19:20:11.179393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.213 qpair failed and we were unable to recover it. 
00:29:54.213 [2024-11-26 19:20:11.179746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.213 [2024-11-26 19:20:11.179774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.213 qpair failed and we were unable to recover it. 00:29:54.213 [2024-11-26 19:20:11.180134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.213 [2024-11-26 19:20:11.180172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.213 qpair failed and we were unable to recover it. 00:29:54.213 [2024-11-26 19:20:11.180533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.213 [2024-11-26 19:20:11.180561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.213 qpair failed and we were unable to recover it. 00:29:54.213 [2024-11-26 19:20:11.180930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.213 [2024-11-26 19:20:11.180959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.213 qpair failed and we were unable to recover it. 00:29:54.213 [2024-11-26 19:20:11.181324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.213 [2024-11-26 19:20:11.181354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.213 qpair failed and we were unable to recover it. 00:29:54.213 [2024-11-26 19:20:11.181717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.213 [2024-11-26 19:20:11.181751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.213 qpair failed and we were unable to recover it. 00:29:54.213 [2024-11-26 19:20:11.182135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.213 [2024-11-26 19:20:11.182195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.214 qpair failed and we were unable to recover it. 00:29:54.214 [2024-11-26 19:20:11.182550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.214 [2024-11-26 19:20:11.182579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.214 qpair failed and we were unable to recover it. 00:29:54.214 [2024-11-26 19:20:11.182994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.214 [2024-11-26 19:20:11.183022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.214 qpair failed and we were unable to recover it. 00:29:54.214 [2024-11-26 19:20:11.183367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.214 [2024-11-26 19:20:11.183398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.214 qpair failed and we were unable to recover it. 
00:29:54.214 [2024-11-26 19:20:11.183738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.214 [2024-11-26 19:20:11.183767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.214 qpair failed and we were unable to recover it. 00:29:54.214 [2024-11-26 19:20:11.184115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.214 [2024-11-26 19:20:11.184144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.214 qpair failed and we were unable to recover it. 00:29:54.214 [2024-11-26 19:20:11.184537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.214 [2024-11-26 19:20:11.184567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.214 qpair failed and we were unable to recover it. 00:29:54.214 [2024-11-26 19:20:11.184941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.214 [2024-11-26 19:20:11.184969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.214 qpair failed and we were unable to recover it. 00:29:54.214 [2024-11-26 19:20:11.185364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.214 [2024-11-26 19:20:11.185394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.214 qpair failed and we were unable to recover it. 00:29:54.214 [2024-11-26 19:20:11.185753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.214 [2024-11-26 19:20:11.185783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.214 qpair failed and we were unable to recover it. 00:29:54.214 [2024-11-26 19:20:11.186145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.214 [2024-11-26 19:20:11.186184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.214 qpair failed and we were unable to recover it. 00:29:54.214 [2024-11-26 19:20:11.186559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.214 [2024-11-26 19:20:11.186587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.214 qpair failed and we were unable to recover it. 00:29:54.214 [2024-11-26 19:20:11.186948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.214 [2024-11-26 19:20:11.186976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.214 qpair failed and we were unable to recover it. 00:29:54.214 [2024-11-26 19:20:11.187339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.214 [2024-11-26 19:20:11.187369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.214 qpair failed and we were unable to recover it. 
00:29:54.214 [2024-11-26 19:20:11.187728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.214 [2024-11-26 19:20:11.187756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.214 qpair failed and we were unable to recover it. 00:29:54.214 [2024-11-26 19:20:11.188042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.214 [2024-11-26 19:20:11.188071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.214 qpair failed and we were unable to recover it. 00:29:54.214 [2024-11-26 19:20:11.188310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.214 [2024-11-26 19:20:11.188343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.214 qpair failed and we were unable to recover it. 00:29:54.214 [2024-11-26 19:20:11.188586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.214 [2024-11-26 19:20:11.188615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.214 qpair failed and we were unable to recover it. 00:29:54.214 [2024-11-26 19:20:11.188964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.214 [2024-11-26 19:20:11.188992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.214 qpair failed and we were unable to recover it. 00:29:54.214 [2024-11-26 19:20:11.189334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.214 [2024-11-26 19:20:11.189365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.214 qpair failed and we were unable to recover it. 00:29:54.214 [2024-11-26 19:20:11.189721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.214 [2024-11-26 19:20:11.189750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.214 qpair failed and we were unable to recover it. 00:29:54.214 [2024-11-26 19:20:11.190108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.214 [2024-11-26 19:20:11.190137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.214 qpair failed and we were unable to recover it. 00:29:54.214 [2024-11-26 19:20:11.190530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.214 [2024-11-26 19:20:11.190562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.214 qpair failed and we were unable to recover it. 00:29:54.214 [2024-11-26 19:20:11.190934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.214 [2024-11-26 19:20:11.190962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.214 qpair failed and we were unable to recover it. 
00:29:54.214 [2024-11-26 19:20:11.191324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.214 [2024-11-26 19:20:11.191355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.214 qpair failed and we were unable to recover it. 00:29:54.214 [2024-11-26 19:20:11.191716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.214 [2024-11-26 19:20:11.191743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.214 qpair failed and we were unable to recover it. 00:29:54.214 [2024-11-26 19:20:11.192090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.214 [2024-11-26 19:20:11.192126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.214 qpair failed and we were unable to recover it. 00:29:54.214 [2024-11-26 19:20:11.192520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.214 [2024-11-26 19:20:11.192550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.214 qpair failed and we were unable to recover it. 00:29:54.214 [2024-11-26 19:20:11.192907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.214 [2024-11-26 19:20:11.192937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.214 qpair failed and we were unable to recover it. 00:29:54.214 [2024-11-26 19:20:11.193283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.214 [2024-11-26 19:20:11.193314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.214 qpair failed and we were unable to recover it. 00:29:54.214 [2024-11-26 19:20:11.193688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.214 [2024-11-26 19:20:11.193719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.214 qpair failed and we were unable to recover it. 00:29:54.214 [2024-11-26 19:20:11.194083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.214 [2024-11-26 19:20:11.194111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.214 qpair failed and we were unable to recover it. 00:29:54.214 [2024-11-26 19:20:11.194413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.214 [2024-11-26 19:20:11.194443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.214 qpair failed and we were unable to recover it. 00:29:54.214 [2024-11-26 19:20:11.194815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.214 [2024-11-26 19:20:11.194845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.214 qpair failed and we were unable to recover it. 
00:29:54.214 [2024-11-26 19:20:11.195218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.214 [2024-11-26 19:20:11.195250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420
00:29:54.214 qpair failed and we were unable to recover it.
00:29:54.220 (the above three-line sequence repeated for every subsequent reconnect attempt from 19:20:11.195649 through 19:20:11.276451, always connect() errno = 111 on tqpair=0x13780c0 to 10.0.0.2, port=4420; identical repetitions condensed)
00:29:54.220 [2024-11-26 19:20:11.276417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.220 [2024-11-26 19:20:11.276451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420
00:29:54.220 qpair failed and we were unable to recover it.
00:29:54.220 [2024-11-26 19:20:11.276837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.220 [2024-11-26 19:20:11.276865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.220 qpair failed and we were unable to recover it. 00:29:54.220 [2024-11-26 19:20:11.277220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.220 [2024-11-26 19:20:11.277249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.220 qpair failed and we were unable to recover it. 00:29:54.220 [2024-11-26 19:20:11.277622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.220 [2024-11-26 19:20:11.277650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.220 qpair failed and we were unable to recover it. 00:29:54.220 [2024-11-26 19:20:11.278019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.220 [2024-11-26 19:20:11.278048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.220 qpair failed and we were unable to recover it. 00:29:54.220 [2024-11-26 19:20:11.278415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.220 [2024-11-26 19:20:11.278445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.220 qpair failed and we were unable to recover it. 00:29:54.220 [2024-11-26 19:20:11.278805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.220 [2024-11-26 19:20:11.278834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.220 qpair failed and we were unable to recover it. 00:29:54.220 [2024-11-26 19:20:11.279202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.220 [2024-11-26 19:20:11.279232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.220 qpair failed and we were unable to recover it. 00:29:54.220 [2024-11-26 19:20:11.279633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.220 [2024-11-26 19:20:11.279660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.220 qpair failed and we were unable to recover it. 00:29:54.220 [2024-11-26 19:20:11.279996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.220 [2024-11-26 19:20:11.280024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.220 qpair failed and we were unable to recover it. 00:29:54.220 [2024-11-26 19:20:11.280401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.220 [2024-11-26 19:20:11.280432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.220 qpair failed and we were unable to recover it. 
00:29:54.220 [2024-11-26 19:20:11.280806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.220 [2024-11-26 19:20:11.280834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.220 qpair failed and we were unable to recover it. 00:29:54.220 [2024-11-26 19:20:11.281197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.220 [2024-11-26 19:20:11.281227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.220 qpair failed and we were unable to recover it. 00:29:54.220 [2024-11-26 19:20:11.281599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.220 [2024-11-26 19:20:11.281627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.220 qpair failed and we were unable to recover it. 00:29:54.220 [2024-11-26 19:20:11.282002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.220 [2024-11-26 19:20:11.282031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.220 qpair failed and we were unable to recover it. 00:29:54.220 [2024-11-26 19:20:11.282387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.220 [2024-11-26 19:20:11.282417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.220 qpair failed and we were unable to recover it. 00:29:54.220 [2024-11-26 19:20:11.282817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.220 [2024-11-26 19:20:11.282846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.220 qpair failed and we were unable to recover it. 00:29:54.220 [2024-11-26 19:20:11.283205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.220 [2024-11-26 19:20:11.283233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.220 qpair failed and we were unable to recover it. 00:29:54.220 [2024-11-26 19:20:11.283622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.220 [2024-11-26 19:20:11.283650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.220 qpair failed and we were unable to recover it. 00:29:54.220 [2024-11-26 19:20:11.284013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.220 [2024-11-26 19:20:11.284042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.220 qpair failed and we were unable to recover it. 00:29:54.220 [2024-11-26 19:20:11.284418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.220 [2024-11-26 19:20:11.284447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.220 qpair failed and we were unable to recover it. 
00:29:54.220 [2024-11-26 19:20:11.284833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.221 [2024-11-26 19:20:11.284864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.221 qpair failed and we were unable to recover it. 00:29:54.221 [2024-11-26 19:20:11.285214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.221 [2024-11-26 19:20:11.285244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.221 qpair failed and we were unable to recover it. 00:29:54.221 [2024-11-26 19:20:11.285615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.221 [2024-11-26 19:20:11.285643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.221 qpair failed and we were unable to recover it. 00:29:54.221 [2024-11-26 19:20:11.286021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.221 [2024-11-26 19:20:11.286049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.221 qpair failed and we were unable to recover it. 00:29:54.221 [2024-11-26 19:20:11.286419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.221 [2024-11-26 19:20:11.286456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.221 qpair failed and we were unable to recover it. 00:29:54.221 [2024-11-26 19:20:11.286794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.221 [2024-11-26 19:20:11.286823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.221 qpair failed and we were unable to recover it. 00:29:54.221 [2024-11-26 19:20:11.287062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.221 [2024-11-26 19:20:11.287094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.221 qpair failed and we were unable to recover it. 00:29:54.221 [2024-11-26 19:20:11.287463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.221 [2024-11-26 19:20:11.287495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.221 qpair failed and we were unable to recover it. 00:29:54.221 [2024-11-26 19:20:11.287861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.221 [2024-11-26 19:20:11.287889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.221 qpair failed and we were unable to recover it. 00:29:54.221 [2024-11-26 19:20:11.288248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.221 [2024-11-26 19:20:11.288277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.221 qpair failed and we were unable to recover it. 
00:29:54.221 [2024-11-26 19:20:11.288640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.221 [2024-11-26 19:20:11.288670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.221 qpair failed and we were unable to recover it. 00:29:54.221 [2024-11-26 19:20:11.289022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.221 [2024-11-26 19:20:11.289050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.221 qpair failed and we were unable to recover it. 00:29:54.221 [2024-11-26 19:20:11.289431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.221 [2024-11-26 19:20:11.289461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.221 qpair failed and we were unable to recover it. 00:29:54.221 [2024-11-26 19:20:11.289805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.221 [2024-11-26 19:20:11.289833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.221 qpair failed and we were unable to recover it. 00:29:54.221 [2024-11-26 19:20:11.290194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.221 [2024-11-26 19:20:11.290226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.221 qpair failed and we were unable to recover it. 00:29:54.221 [2024-11-26 19:20:11.290554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.221 [2024-11-26 19:20:11.290582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.221 qpair failed and we were unable to recover it. 00:29:54.221 [2024-11-26 19:20:11.290930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.221 [2024-11-26 19:20:11.290958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.221 qpair failed and we were unable to recover it. 00:29:54.221 [2024-11-26 19:20:11.291323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.221 [2024-11-26 19:20:11.291354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.221 qpair failed and we were unable to recover it. 00:29:54.221 [2024-11-26 19:20:11.291689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.221 [2024-11-26 19:20:11.291719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.221 qpair failed and we were unable to recover it. 00:29:54.221 [2024-11-26 19:20:11.291934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.221 [2024-11-26 19:20:11.291962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.221 qpair failed and we were unable to recover it. 
00:29:54.221 [2024-11-26 19:20:11.292335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.221 [2024-11-26 19:20:11.292366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.221 qpair failed and we were unable to recover it. 00:29:54.221 [2024-11-26 19:20:11.292721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.221 [2024-11-26 19:20:11.292749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.221 qpair failed and we were unable to recover it. 00:29:54.221 [2024-11-26 19:20:11.293105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.221 [2024-11-26 19:20:11.293133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.221 qpair failed and we were unable to recover it. 00:29:54.221 [2024-11-26 19:20:11.293451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.221 [2024-11-26 19:20:11.293481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.221 qpair failed and we were unable to recover it. 00:29:54.221 [2024-11-26 19:20:11.293719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.221 [2024-11-26 19:20:11.293752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.221 qpair failed and we were unable to recover it. 00:29:54.221 [2024-11-26 19:20:11.294122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.221 [2024-11-26 19:20:11.294150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.221 qpair failed and we were unable to recover it. 00:29:54.221 [2024-11-26 19:20:11.294536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.221 [2024-11-26 19:20:11.294565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.221 qpair failed and we were unable to recover it. 00:29:54.221 [2024-11-26 19:20:11.294933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.221 [2024-11-26 19:20:11.294963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.221 qpair failed and we were unable to recover it. 00:29:54.221 [2024-11-26 19:20:11.295331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.221 [2024-11-26 19:20:11.295361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.221 qpair failed and we were unable to recover it. 00:29:54.221 [2024-11-26 19:20:11.295729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.221 [2024-11-26 19:20:11.295758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.221 qpair failed and we were unable to recover it. 
00:29:54.221 [2024-11-26 19:20:11.296110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.221 [2024-11-26 19:20:11.296139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.221 qpair failed and we were unable to recover it. 00:29:54.221 [2024-11-26 19:20:11.296501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.221 [2024-11-26 19:20:11.296531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.221 qpair failed and we were unable to recover it. 00:29:54.221 [2024-11-26 19:20:11.296890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.221 [2024-11-26 19:20:11.296918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.221 qpair failed and we were unable to recover it. 00:29:54.221 [2024-11-26 19:20:11.297178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.221 [2024-11-26 19:20:11.297209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 00:29:54.222 [2024-11-26 19:20:11.297590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.222 [2024-11-26 19:20:11.297618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 00:29:54.222 [2024-11-26 19:20:11.297986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.222 [2024-11-26 19:20:11.298014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 00:29:54.222 [2024-11-26 19:20:11.298392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.222 [2024-11-26 19:20:11.298423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 00:29:54.222 [2024-11-26 19:20:11.298769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.222 [2024-11-26 19:20:11.298798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 00:29:54.222 [2024-11-26 19:20:11.299195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.222 [2024-11-26 19:20:11.299225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 00:29:54.222 [2024-11-26 19:20:11.299581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.222 [2024-11-26 19:20:11.299609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 
00:29:54.222 [2024-11-26 19:20:11.299972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.222 [2024-11-26 19:20:11.299999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 00:29:54.222 [2024-11-26 19:20:11.300346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.222 [2024-11-26 19:20:11.300375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 00:29:54.222 [2024-11-26 19:20:11.300731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.222 [2024-11-26 19:20:11.300760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 00:29:54.222 [2024-11-26 19:20:11.300990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.222 [2024-11-26 19:20:11.301022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 00:29:54.222 [2024-11-26 19:20:11.301377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.222 [2024-11-26 19:20:11.301407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 00:29:54.222 [2024-11-26 19:20:11.301780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.222 [2024-11-26 19:20:11.301808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 00:29:54.222 [2024-11-26 19:20:11.302181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.222 [2024-11-26 19:20:11.302210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 00:29:54.222 [2024-11-26 19:20:11.302484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.222 [2024-11-26 19:20:11.302518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 00:29:54.222 [2024-11-26 19:20:11.302899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.222 [2024-11-26 19:20:11.302928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 00:29:54.222 [2024-11-26 19:20:11.303371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.222 [2024-11-26 19:20:11.303401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 
00:29:54.222 [2024-11-26 19:20:11.303761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.222 [2024-11-26 19:20:11.303789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 00:29:54.222 [2024-11-26 19:20:11.304151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.222 [2024-11-26 19:20:11.304204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 00:29:54.222 [2024-11-26 19:20:11.304585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.222 [2024-11-26 19:20:11.304612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 00:29:54.222 [2024-11-26 19:20:11.304971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.222 [2024-11-26 19:20:11.305002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 00:29:54.222 [2024-11-26 19:20:11.305342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.222 [2024-11-26 19:20:11.305372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 00:29:54.222 [2024-11-26 19:20:11.305743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.222 [2024-11-26 19:20:11.305771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 00:29:54.222 [2024-11-26 19:20:11.306136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.222 [2024-11-26 19:20:11.306176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 00:29:54.222 [2024-11-26 19:20:11.306615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.222 [2024-11-26 19:20:11.306642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 00:29:54.222 [2024-11-26 19:20:11.306987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.222 [2024-11-26 19:20:11.307015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 00:29:54.222 [2024-11-26 19:20:11.307352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.222 [2024-11-26 19:20:11.307383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 
00:29:54.222 [2024-11-26 19:20:11.307743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.222 [2024-11-26 19:20:11.307772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 00:29:54.222 [2024-11-26 19:20:11.308134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.222 [2024-11-26 19:20:11.308175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 00:29:54.222 [2024-11-26 19:20:11.308540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.222 [2024-11-26 19:20:11.308569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 00:29:54.222 [2024-11-26 19:20:11.308928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.222 [2024-11-26 19:20:11.308955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 00:29:54.222 [2024-11-26 19:20:11.309319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.222 [2024-11-26 19:20:11.309349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 00:29:54.222 [2024-11-26 19:20:11.309697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.222 [2024-11-26 19:20:11.309728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 00:29:54.222 [2024-11-26 19:20:11.310064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.222 [2024-11-26 19:20:11.310092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 00:29:54.222 [2024-11-26 19:20:11.310464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.222 [2024-11-26 19:20:11.310494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 00:29:54.222 [2024-11-26 19:20:11.310855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.222 [2024-11-26 19:20:11.310883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 00:29:54.222 [2024-11-26 19:20:11.311238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.222 [2024-11-26 19:20:11.311267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 
00:29:54.222 [2024-11-26 19:20:11.311648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.222 [2024-11-26 19:20:11.311677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 00:29:54.222 [2024-11-26 19:20:11.311896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.222 [2024-11-26 19:20:11.311928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.222 qpair failed and we were unable to recover it. 00:29:54.222 [2024-11-26 19:20:11.312301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.312331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it. 00:29:54.223 [2024-11-26 19:20:11.312689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.312718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it. 00:29:54.223 [2024-11-26 19:20:11.313060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.313094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it. 00:29:54.223 [2024-11-26 19:20:11.313435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.313466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it. 00:29:54.223 [2024-11-26 19:20:11.313825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.313855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it. 00:29:54.223 [2024-11-26 19:20:11.314214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.314244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it. 00:29:54.223 [2024-11-26 19:20:11.314623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.314652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it. 00:29:54.223 [2024-11-26 19:20:11.315010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.315038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it. 
00:29:54.223 [2024-11-26 19:20:11.315394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.315425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it. 00:29:54.223 [2024-11-26 19:20:11.315798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.315826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it. 00:29:54.223 [2024-11-26 19:20:11.316183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.316214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it. 00:29:54.223 [2024-11-26 19:20:11.316570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.316599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it. 00:29:54.223 [2024-11-26 19:20:11.316972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.317000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it. 00:29:54.223 [2024-11-26 19:20:11.317354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.317384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it. 00:29:54.223 [2024-11-26 19:20:11.317726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.317754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it. 00:29:54.223 [2024-11-26 19:20:11.317991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.318021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it. 00:29:54.223 [2024-11-26 19:20:11.318358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.318388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it. 00:29:54.223 [2024-11-26 19:20:11.318743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.318774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it. 
00:29:54.223 [2024-11-26 19:20:11.319188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.319218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it. 00:29:54.223 [2024-11-26 19:20:11.319570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.319599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it. 00:29:54.223 [2024-11-26 19:20:11.319874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.319903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it. 00:29:54.223 [2024-11-26 19:20:11.320254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.320284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it. 00:29:54.223 [2024-11-26 19:20:11.320633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.320662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it. 00:29:54.223 [2024-11-26 19:20:11.321027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.321057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it. 00:29:54.223 [2024-11-26 19:20:11.321403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.321434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it. 00:29:54.223 [2024-11-26 19:20:11.321789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.321818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it. 00:29:54.223 [2024-11-26 19:20:11.322183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.322213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it. 00:29:54.223 [2024-11-26 19:20:11.322467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.322499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it. 
00:29:54.223 [2024-11-26 19:20:11.322900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.322929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it. 00:29:54.223 [2024-11-26 19:20:11.323292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.323335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it. 00:29:54.223 [2024-11-26 19:20:11.323680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.323708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it. 00:29:54.223 [2024-11-26 19:20:11.323958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.323989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it. 00:29:54.223 [2024-11-26 19:20:11.324342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.324372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it. 00:29:54.223 [2024-11-26 19:20:11.324738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.324766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it. 00:29:54.223 [2024-11-26 19:20:11.325124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.325152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it. 00:29:54.223 [2024-11-26 19:20:11.325516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.325545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it. 00:29:54.223 [2024-11-26 19:20:11.325903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.325930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it. 00:29:54.223 [2024-11-26 19:20:11.326295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.326325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it. 
00:29:54.223 [2024-11-26 19:20:11.326688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.223 [2024-11-26 19:20:11.326717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.223 qpair failed and we were unable to recover it.
[... the same three-message error sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 19:20:11.326688 through 19:20:11.409978 (wall clock 00:29:54.223-00:29:54.508), with identical tqpair pointer, target address, and port throughout ...]
00:29:54.508 [2024-11-26 19:20:11.409948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.508 [2024-11-26 19:20:11.409978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.508 qpair failed and we were unable to recover it.
00:29:54.508 [2024-11-26 19:20:11.410356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.508 [2024-11-26 19:20:11.410386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.508 qpair failed and we were unable to recover it. 00:29:54.508 [2024-11-26 19:20:11.410753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.508 [2024-11-26 19:20:11.410782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.508 qpair failed and we were unable to recover it. 00:29:54.508 [2024-11-26 19:20:11.411144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.508 [2024-11-26 19:20:11.411186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.508 qpair failed and we were unable to recover it. 00:29:54.508 [2024-11-26 19:20:11.411562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.508 [2024-11-26 19:20:11.411591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.508 qpair failed and we were unable to recover it. 00:29:54.508 [2024-11-26 19:20:11.411966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.508 [2024-11-26 19:20:11.411996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.508 qpair failed and we were unable to recover it. 00:29:54.508 [2024-11-26 19:20:11.412350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.508 [2024-11-26 19:20:11.412381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.508 qpair failed and we were unable to recover it. 00:29:54.508 [2024-11-26 19:20:11.412752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.508 [2024-11-26 19:20:11.412780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.508 qpair failed and we were unable to recover it. 00:29:54.508 [2024-11-26 19:20:11.413179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.508 [2024-11-26 19:20:11.413210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.508 qpair failed and we were unable to recover it. 00:29:54.508 [2024-11-26 19:20:11.413589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.508 [2024-11-26 19:20:11.413618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.508 qpair failed and we were unable to recover it. 00:29:54.508 [2024-11-26 19:20:11.413919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.508 [2024-11-26 19:20:11.413948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.508 qpair failed and we were unable to recover it. 
00:29:54.508 [2024-11-26 19:20:11.414308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.508 [2024-11-26 19:20:11.414340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.508 qpair failed and we were unable to recover it. 00:29:54.508 [2024-11-26 19:20:11.414781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.508 [2024-11-26 19:20:11.414810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.508 qpair failed and we were unable to recover it. 00:29:54.508 [2024-11-26 19:20:11.415182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.508 [2024-11-26 19:20:11.415214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.508 qpair failed and we were unable to recover it. 00:29:54.508 [2024-11-26 19:20:11.415560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.508 [2024-11-26 19:20:11.415588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.508 qpair failed and we were unable to recover it. 00:29:54.508 [2024-11-26 19:20:11.415948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.508 [2024-11-26 19:20:11.415976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.508 qpair failed and we were unable to recover it. 00:29:54.508 [2024-11-26 19:20:11.416339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.508 [2024-11-26 19:20:11.416370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.508 qpair failed and we were unable to recover it. 00:29:54.508 [2024-11-26 19:20:11.416738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.508 [2024-11-26 19:20:11.416767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.508 qpair failed and we were unable to recover it. 00:29:54.508 [2024-11-26 19:20:11.417140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.508 [2024-11-26 19:20:11.417203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.508 qpair failed and we were unable to recover it. 00:29:54.508 [2024-11-26 19:20:11.417568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.508 [2024-11-26 19:20:11.417598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.508 qpair failed and we were unable to recover it. 00:29:54.508 [2024-11-26 19:20:11.417848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.508 [2024-11-26 19:20:11.417876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.508 qpair failed and we were unable to recover it. 
00:29:54.508 [2024-11-26 19:20:11.418238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.508 [2024-11-26 19:20:11.418268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.508 qpair failed and we were unable to recover it. 00:29:54.508 [2024-11-26 19:20:11.418633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.508 [2024-11-26 19:20:11.418662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.508 qpair failed and we were unable to recover it. 00:29:54.508 [2024-11-26 19:20:11.418927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.508 [2024-11-26 19:20:11.418956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.508 qpair failed and we were unable to recover it. 00:29:54.508 [2024-11-26 19:20:11.419316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.419345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.509 qpair failed and we were unable to recover it. 00:29:54.509 [2024-11-26 19:20:11.419711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.419739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.509 qpair failed and we were unable to recover it. 00:29:54.509 [2024-11-26 19:20:11.420101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.420130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.509 qpair failed and we were unable to recover it. 00:29:54.509 [2024-11-26 19:20:11.420539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.420569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.509 qpair failed and we were unable to recover it. 00:29:54.509 [2024-11-26 19:20:11.420933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.420961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.509 qpair failed and we were unable to recover it. 00:29:54.509 [2024-11-26 19:20:11.421215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.421246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.509 qpair failed and we were unable to recover it. 00:29:54.509 [2024-11-26 19:20:11.421593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.421624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.509 qpair failed and we were unable to recover it. 
00:29:54.509 [2024-11-26 19:20:11.421968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.421996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.509 qpair failed and we were unable to recover it. 00:29:54.509 [2024-11-26 19:20:11.422358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.422388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.509 qpair failed and we were unable to recover it. 00:29:54.509 [2024-11-26 19:20:11.422746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.422775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.509 qpair failed and we were unable to recover it. 00:29:54.509 [2024-11-26 19:20:11.423140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.423183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.509 qpair failed and we were unable to recover it. 00:29:54.509 [2024-11-26 19:20:11.423511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.423540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.509 qpair failed and we were unable to recover it. 00:29:54.509 [2024-11-26 19:20:11.423913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.423941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.509 qpair failed and we were unable to recover it. 00:29:54.509 [2024-11-26 19:20:11.424311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.424341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.509 qpair failed and we were unable to recover it. 00:29:54.509 [2024-11-26 19:20:11.424705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.424734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.509 qpair failed and we were unable to recover it. 00:29:54.509 [2024-11-26 19:20:11.425187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.425219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.509 qpair failed and we were unable to recover it. 00:29:54.509 [2024-11-26 19:20:11.425567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.425598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.509 qpair failed and we were unable to recover it. 
00:29:54.509 [2024-11-26 19:20:11.425945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.425974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.509 qpair failed and we were unable to recover it. 00:29:54.509 [2024-11-26 19:20:11.426374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.426404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.509 qpair failed and we were unable to recover it. 00:29:54.509 [2024-11-26 19:20:11.426763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.426791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.509 qpair failed and we were unable to recover it. 00:29:54.509 [2024-11-26 19:20:11.427151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.427191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.509 qpair failed and we were unable to recover it. 00:29:54.509 [2024-11-26 19:20:11.427523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.427553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.509 qpair failed and we were unable to recover it. 00:29:54.509 [2024-11-26 19:20:11.427894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.427924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.509 qpair failed and we were unable to recover it. 00:29:54.509 [2024-11-26 19:20:11.428288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.428318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.509 qpair failed and we were unable to recover it. 00:29:54.509 [2024-11-26 19:20:11.428653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.428682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.509 qpair failed and we were unable to recover it. 00:29:54.509 [2024-11-26 19:20:11.429050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.429078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.509 qpair failed and we were unable to recover it. 00:29:54.509 [2024-11-26 19:20:11.429453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.429484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.509 qpair failed and we were unable to recover it. 
00:29:54.509 [2024-11-26 19:20:11.429834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.429862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.509 qpair failed and we were unable to recover it. 00:29:54.509 [2024-11-26 19:20:11.430205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.430237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.509 qpair failed and we were unable to recover it. 00:29:54.509 [2024-11-26 19:20:11.430664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.430693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.509 qpair failed and we were unable to recover it. 00:29:54.509 [2024-11-26 19:20:11.431065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.431094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.509 qpair failed and we were unable to recover it. 00:29:54.509 [2024-11-26 19:20:11.431337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.431366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.509 qpair failed and we were unable to recover it. 00:29:54.509 [2024-11-26 19:20:11.431726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.431756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.509 qpair failed and we were unable to recover it. 00:29:54.509 [2024-11-26 19:20:11.432118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.432148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.509 qpair failed and we were unable to recover it. 00:29:54.509 [2024-11-26 19:20:11.432481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.432510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.509 qpair failed and we were unable to recover it. 00:29:54.509 [2024-11-26 19:20:11.432877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.432906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.509 qpair failed and we were unable to recover it. 00:29:54.509 [2024-11-26 19:20:11.433187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.433218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.509 qpair failed and we were unable to recover it. 
00:29:54.509 [2024-11-26 19:20:11.433564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.433592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.509 qpair failed and we were unable to recover it. 00:29:54.509 [2024-11-26 19:20:11.433951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.509 [2024-11-26 19:20:11.433979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.510 qpair failed and we were unable to recover it. 00:29:54.510 [2024-11-26 19:20:11.434344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.510 [2024-11-26 19:20:11.434375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.510 qpair failed and we were unable to recover it. 00:29:54.510 [2024-11-26 19:20:11.434734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.510 [2024-11-26 19:20:11.434764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.510 qpair failed and we were unable to recover it. 00:29:54.510 [2024-11-26 19:20:11.435134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.510 [2024-11-26 19:20:11.435176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.510 qpair failed and we were unable to recover it. 00:29:54.510 [2024-11-26 19:20:11.435542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.510 [2024-11-26 19:20:11.435571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.510 qpair failed and we were unable to recover it. 00:29:54.510 [2024-11-26 19:20:11.435933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.510 [2024-11-26 19:20:11.435969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.510 qpair failed and we were unable to recover it. 00:29:54.510 [2024-11-26 19:20:11.436327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.510 [2024-11-26 19:20:11.436358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.510 qpair failed and we were unable to recover it. 00:29:54.510 [2024-11-26 19:20:11.436742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.510 [2024-11-26 19:20:11.436771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.510 qpair failed and we were unable to recover it. 00:29:54.510 [2024-11-26 19:20:11.437132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.510 [2024-11-26 19:20:11.437172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.510 qpair failed and we were unable to recover it. 
00:29:54.510 [2024-11-26 19:20:11.437549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.510 [2024-11-26 19:20:11.437578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.510 qpair failed and we were unable to recover it. 00:29:54.510 [2024-11-26 19:20:11.437935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.510 [2024-11-26 19:20:11.437964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.510 qpair failed and we were unable to recover it. 00:29:54.510 [2024-11-26 19:20:11.438342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.510 [2024-11-26 19:20:11.438372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.510 qpair failed and we were unable to recover it. 00:29:54.510 [2024-11-26 19:20:11.438732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.510 [2024-11-26 19:20:11.438762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.510 qpair failed and we were unable to recover it. 00:29:54.510 [2024-11-26 19:20:11.439127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.510 [2024-11-26 19:20:11.439156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.510 qpair failed and we were unable to recover it. 00:29:54.510 [2024-11-26 19:20:11.439539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.510 [2024-11-26 19:20:11.439570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.510 qpair failed and we were unable to recover it. 00:29:54.510 [2024-11-26 19:20:11.439930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.510 [2024-11-26 19:20:11.439960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.510 qpair failed and we were unable to recover it. 00:29:54.510 [2024-11-26 19:20:11.440318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.510 [2024-11-26 19:20:11.440349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.510 qpair failed and we were unable to recover it. 00:29:54.510 [2024-11-26 19:20:11.440705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.510 [2024-11-26 19:20:11.440733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.510 qpair failed and we were unable to recover it. 00:29:54.510 [2024-11-26 19:20:11.441105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.510 [2024-11-26 19:20:11.441134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.510 qpair failed and we were unable to recover it. 
00:29:54.510 [2024-11-26 19:20:11.441544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.510 [2024-11-26 19:20:11.441575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.510 qpair failed and we were unable to recover it. 00:29:54.510 [2024-11-26 19:20:11.441928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.510 [2024-11-26 19:20:11.441956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.510 qpair failed and we were unable to recover it. 00:29:54.510 [2024-11-26 19:20:11.442319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.510 [2024-11-26 19:20:11.442349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.510 qpair failed and we were unable to recover it. 00:29:54.510 [2024-11-26 19:20:11.442693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.510 [2024-11-26 19:20:11.442723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.510 qpair failed and we were unable to recover it. 00:29:54.510 [2024-11-26 19:20:11.443095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.510 [2024-11-26 19:20:11.443123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.510 qpair failed and we were unable to recover it. 00:29:54.510 [2024-11-26 19:20:11.443496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.510 [2024-11-26 19:20:11.443529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.510 qpair failed and we were unable to recover it. 00:29:54.510 [2024-11-26 19:20:11.443794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.510 [2024-11-26 19:20:11.443822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.510 qpair failed and we were unable to recover it. 00:29:54.510 [2024-11-26 19:20:11.444096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.510 [2024-11-26 19:20:11.444125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.510 qpair failed and we were unable to recover it. 00:29:54.510 [2024-11-26 19:20:11.444493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.510 [2024-11-26 19:20:11.444522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.510 qpair failed and we were unable to recover it. 00:29:54.510 [2024-11-26 19:20:11.444883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.510 [2024-11-26 19:20:11.444911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.510 qpair failed and we were unable to recover it. 
00:29:54.510 [2024-11-26 19:20:11.445284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.510 [2024-11-26 19:20:11.445314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.510 qpair failed and we were unable to recover it. 00:29:54.510 [2024-11-26 19:20:11.445566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.510 [2024-11-26 19:20:11.445594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.510 qpair failed and we were unable to recover it. 00:29:54.510 [2024-11-26 19:20:11.445942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.510 [2024-11-26 19:20:11.445970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.510 qpair failed and we were unable to recover it. 00:29:54.510 [2024-11-26 19:20:11.446334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.510 [2024-11-26 19:20:11.446372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.510 qpair failed and we were unable to recover it. 00:29:54.511 [2024-11-26 19:20:11.446718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.511 [2024-11-26 19:20:11.446747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.511 qpair failed and we were unable to recover it. 00:29:54.511 [2024-11-26 19:20:11.447112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.511 [2024-11-26 19:20:11.447141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.511 qpair failed and we were unable to recover it. 00:29:54.511 [2024-11-26 19:20:11.447518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.511 [2024-11-26 19:20:11.447548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.511 qpair failed and we were unable to recover it. 00:29:54.511 [2024-11-26 19:20:11.447695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.511 [2024-11-26 19:20:11.447727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.511 qpair failed and we were unable to recover it. 00:29:54.511 [2024-11-26 19:20:11.447973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.511 [2024-11-26 19:20:11.448003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.511 qpair failed and we were unable to recover it. 00:29:54.511 [2024-11-26 19:20:11.448379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.511 [2024-11-26 19:20:11.448409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.511 qpair failed and we were unable to recover it. 
00:29:54.511 [2024-11-26 19:20:11.448884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.511 [2024-11-26 19:20:11.448913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.511 qpair failed and we were unable to recover it. 00:29:54.511 [2024-11-26 19:20:11.449272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.511 [2024-11-26 19:20:11.449302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.511 qpair failed and we were unable to recover it. 00:29:54.511 [2024-11-26 19:20:11.449668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.511 [2024-11-26 19:20:11.449696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.511 qpair failed and we were unable to recover it. 00:29:54.511 [2024-11-26 19:20:11.450055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.511 [2024-11-26 19:20:11.450084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.511 qpair failed and we were unable to recover it. 00:29:54.511 [2024-11-26 19:20:11.450446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.511 [2024-11-26 19:20:11.450477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.511 qpair failed and we were unable to recover it. 00:29:54.511 [2024-11-26 19:20:11.450720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.511 [2024-11-26 19:20:11.450752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.511 qpair failed and we were unable to recover it. 00:29:54.511 [2024-11-26 19:20:11.451104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.511 [2024-11-26 19:20:11.451133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.511 qpair failed and we were unable to recover it. 00:29:54.511 [2024-11-26 19:20:11.451553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.511 [2024-11-26 19:20:11.451583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.511 qpair failed and we were unable to recover it. 00:29:54.511 [2024-11-26 19:20:11.451936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.511 [2024-11-26 19:20:11.451966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.511 qpair failed and we were unable to recover it. 00:29:54.511 [2024-11-26 19:20:11.452302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.511 [2024-11-26 19:20:11.452332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.511 qpair failed and we were unable to recover it. 
00:29:54.511 [2024-11-26 19:20:11.452704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.511 [2024-11-26 19:20:11.452733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.511 qpair failed and we were unable to recover it. 00:29:54.511 [2024-11-26 19:20:11.453093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.511 [2024-11-26 19:20:11.453122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.511 qpair failed and we were unable to recover it. 00:29:54.511 [2024-11-26 19:20:11.453374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.511 [2024-11-26 19:20:11.453408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.511 qpair failed and we were unable to recover it. 00:29:54.511 [2024-11-26 19:20:11.453866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.511 [2024-11-26 19:20:11.453895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.511 qpair failed and we were unable to recover it. 00:29:54.511 [2024-11-26 19:20:11.454250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.511 [2024-11-26 19:20:11.454283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.511 qpair failed and we were unable to recover it. 00:29:54.511 [2024-11-26 19:20:11.454657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.511 [2024-11-26 19:20:11.454685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.511 qpair failed and we were unable to recover it. 00:29:54.511 [2024-11-26 19:20:11.455048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.511 [2024-11-26 19:20:11.455076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.511 qpair failed and we were unable to recover it. 00:29:54.511 [2024-11-26 19:20:11.455435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.511 [2024-11-26 19:20:11.455476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.511 qpair failed and we were unable to recover it. 00:29:54.511 [2024-11-26 19:20:11.455825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.511 [2024-11-26 19:20:11.455854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.511 qpair failed and we were unable to recover it. 00:29:54.511 [2024-11-26 19:20:11.456220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.511 [2024-11-26 19:20:11.456250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.511 qpair failed and we were unable to recover it. 
00:29:54.511 [2024-11-26 19:20:11.456621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.511 [2024-11-26 19:20:11.456649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.511 qpair failed and we were unable to recover it. 00:29:54.511 [2024-11-26 19:20:11.457013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.511 [2024-11-26 19:20:11.457042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.511 qpair failed and we were unable to recover it. 00:29:54.511 [2024-11-26 19:20:11.457390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.511 [2024-11-26 19:20:11.457421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.511 qpair failed and we were unable to recover it. 00:29:54.511 [2024-11-26 19:20:11.457764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.511 [2024-11-26 19:20:11.457793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.511 qpair failed and we were unable to recover it. 00:29:54.511 [2024-11-26 19:20:11.458208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.511 [2024-11-26 19:20:11.458238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.511 qpair failed and we were unable to recover it. 00:29:54.511 [2024-11-26 19:20:11.458590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.511 [2024-11-26 19:20:11.458620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.511 qpair failed and we were unable to recover it. 00:29:54.511 [2024-11-26 19:20:11.458873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.511 [2024-11-26 19:20:11.458901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.511 qpair failed and we were unable to recover it. 00:29:54.511 [2024-11-26 19:20:11.459288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.511 [2024-11-26 19:20:11.459319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.511 qpair failed and we were unable to recover it. 00:29:54.511 [2024-11-26 19:20:11.459682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.511 [2024-11-26 19:20:11.459710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.511 qpair failed and we were unable to recover it. 00:29:54.511 [2024-11-26 19:20:11.460070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.511 [2024-11-26 19:20:11.460100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.511 qpair failed and we were unable to recover it. 
00:29:54.511 [2024-11-26 19:20:11.460488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.511 [2024-11-26 19:20:11.460520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420
00:29:54.511 qpair failed and we were unable to recover it.
00:29:54.511 [... the same three-line failure pattern (posix_sock_create connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock sock connection error -> qpair failed and we were unable to recover it) repeats continuously from 19:20:11.460 through 19:20:11.540, roughly 200 consecutive attempts, all against tqpair=0x13780c0 with addr=10.0.0.2, port=4420 ...]
00:29:54.517 [2024-11-26 19:20:11.540509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.517 [2024-11-26 19:20:11.540540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420
00:29:54.517 qpair failed and we were unable to recover it.
00:29:54.517 [2024-11-26 19:20:11.540894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.517 [2024-11-26 19:20:11.540923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.517 qpair failed and we were unable to recover it. 00:29:54.517 [2024-11-26 19:20:11.541179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.517 [2024-11-26 19:20:11.541208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.517 qpair failed and we were unable to recover it. 00:29:54.517 [2024-11-26 19:20:11.541584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.517 [2024-11-26 19:20:11.541613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.517 qpair failed and we were unable to recover it. 00:29:54.517 [2024-11-26 19:20:11.541975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.517 [2024-11-26 19:20:11.542005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.517 qpair failed and we were unable to recover it. 00:29:54.517 [2024-11-26 19:20:11.542391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.517 [2024-11-26 19:20:11.542422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.517 qpair failed and we were unable to recover it. 00:29:54.517 [2024-11-26 19:20:11.542793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.517 [2024-11-26 19:20:11.542822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.517 qpair failed and we were unable to recover it. 00:29:54.517 [2024-11-26 19:20:11.543177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.517 [2024-11-26 19:20:11.543207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.517 qpair failed and we were unable to recover it. 00:29:54.517 [2024-11-26 19:20:11.543484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.517 [2024-11-26 19:20:11.543513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.517 qpair failed and we were unable to recover it. 00:29:54.518 [2024-11-26 19:20:11.543851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.518 [2024-11-26 19:20:11.543882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.518 qpair failed and we were unable to recover it. 00:29:54.518 [2024-11-26 19:20:11.544236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.518 [2024-11-26 19:20:11.544267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.518 qpair failed and we were unable to recover it. 
00:29:54.518 [2024-11-26 19:20:11.544639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.518 [2024-11-26 19:20:11.544668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.518 qpair failed and we were unable to recover it. 00:29:54.518 [2024-11-26 19:20:11.545043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.518 [2024-11-26 19:20:11.545074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.518 qpair failed and we were unable to recover it. 00:29:54.518 [2024-11-26 19:20:11.545426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.518 [2024-11-26 19:20:11.545456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.518 qpair failed and we were unable to recover it. 00:29:54.518 [2024-11-26 19:20:11.545815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.518 [2024-11-26 19:20:11.545843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.518 qpair failed and we were unable to recover it. 00:29:54.518 [2024-11-26 19:20:11.546212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.518 [2024-11-26 19:20:11.546242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.518 qpair failed and we were unable to recover it. 00:29:54.518 [2024-11-26 19:20:11.546635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.518 [2024-11-26 19:20:11.546664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.518 qpair failed and we were unable to recover it. 00:29:54.518 [2024-11-26 19:20:11.547025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.518 [2024-11-26 19:20:11.547053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.518 qpair failed and we were unable to recover it. 00:29:54.518 [2024-11-26 19:20:11.547427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.518 [2024-11-26 19:20:11.547458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.518 qpair failed and we were unable to recover it. 00:29:54.518 [2024-11-26 19:20:11.547731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.518 [2024-11-26 19:20:11.547760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.518 qpair failed and we were unable to recover it. 00:29:54.518 [2024-11-26 19:20:11.548217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.518 [2024-11-26 19:20:11.548249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.518 qpair failed and we were unable to recover it. 
00:29:54.518 [2024-11-26 19:20:11.548674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.518 [2024-11-26 19:20:11.548703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.518 qpair failed and we were unable to recover it. 00:29:54.518 [2024-11-26 19:20:11.549056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.518 [2024-11-26 19:20:11.549085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.518 qpair failed and we were unable to recover it. 00:29:54.518 [2024-11-26 19:20:11.549341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.518 [2024-11-26 19:20:11.549372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.518 qpair failed and we were unable to recover it. 00:29:54.518 [2024-11-26 19:20:11.549691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.518 [2024-11-26 19:20:11.549720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.518 qpair failed and we were unable to recover it. 00:29:54.518 [2024-11-26 19:20:11.550120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.518 [2024-11-26 19:20:11.550149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.518 qpair failed and we were unable to recover it. 00:29:54.518 [2024-11-26 19:20:11.550530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.518 [2024-11-26 19:20:11.550559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.518 qpair failed and we were unable to recover it. 00:29:54.518 [2024-11-26 19:20:11.550927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.518 [2024-11-26 19:20:11.550956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.518 qpair failed and we were unable to recover it. 00:29:54.518 [2024-11-26 19:20:11.551322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.518 [2024-11-26 19:20:11.551352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.518 qpair failed and we were unable to recover it. 00:29:54.518 [2024-11-26 19:20:11.551714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.518 [2024-11-26 19:20:11.551744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.518 qpair failed and we were unable to recover it. 00:29:54.518 [2024-11-26 19:20:11.552100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.518 [2024-11-26 19:20:11.552129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.518 qpair failed and we were unable to recover it. 
00:29:54.518 [2024-11-26 19:20:11.552543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.518 [2024-11-26 19:20:11.552574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.518 qpair failed and we were unable to recover it. 00:29:54.518 [2024-11-26 19:20:11.553014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.518 [2024-11-26 19:20:11.553043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.518 qpair failed and we were unable to recover it. 00:29:54.518 [2024-11-26 19:20:11.553407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.518 [2024-11-26 19:20:11.553436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.518 qpair failed and we were unable to recover it. 00:29:54.518 [2024-11-26 19:20:11.553793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.518 [2024-11-26 19:20:11.553821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.518 qpair failed and we were unable to recover it. 00:29:54.518 [2024-11-26 19:20:11.554186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.518 [2024-11-26 19:20:11.554216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.518 qpair failed and we were unable to recover it. 00:29:54.518 [2024-11-26 19:20:11.554571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.518 [2024-11-26 19:20:11.554600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.518 qpair failed and we were unable to recover it. 00:29:54.518 [2024-11-26 19:20:11.555013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.518 [2024-11-26 19:20:11.555042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.518 qpair failed and we were unable to recover it. 00:29:54.518 [2024-11-26 19:20:11.555425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.518 [2024-11-26 19:20:11.555456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.518 qpair failed and we were unable to recover it. 00:29:54.518 [2024-11-26 19:20:11.555825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.518 [2024-11-26 19:20:11.555860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.518 qpair failed and we were unable to recover it. 00:29:54.518 [2024-11-26 19:20:11.556206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.518 [2024-11-26 19:20:11.556238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.518 qpair failed and we were unable to recover it. 
00:29:54.518 [2024-11-26 19:20:11.556590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.518 [2024-11-26 19:20:11.556620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.518 qpair failed and we were unable to recover it. 00:29:54.518 [2024-11-26 19:20:11.556981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.518 [2024-11-26 19:20:11.557010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.518 qpair failed and we were unable to recover it. 00:29:54.518 [2024-11-26 19:20:11.557383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.518 [2024-11-26 19:20:11.557414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.518 qpair failed and we were unable to recover it. 00:29:54.519 [2024-11-26 19:20:11.557777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.519 [2024-11-26 19:20:11.557805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.519 qpair failed and we were unable to recover it. 00:29:54.519 [2024-11-26 19:20:11.558189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.519 [2024-11-26 19:20:11.558221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.519 qpair failed and we were unable to recover it. 00:29:54.519 [2024-11-26 19:20:11.558493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.519 [2024-11-26 19:20:11.558521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.519 qpair failed and we were unable to recover it. 00:29:54.519 [2024-11-26 19:20:11.558755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.519 [2024-11-26 19:20:11.558784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.519 qpair failed and we were unable to recover it. 00:29:54.519 [2024-11-26 19:20:11.559135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.519 [2024-11-26 19:20:11.559182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.519 qpair failed and we were unable to recover it. 00:29:54.519 [2024-11-26 19:20:11.559532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.519 [2024-11-26 19:20:11.559563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.519 qpair failed and we were unable to recover it. 00:29:54.519 [2024-11-26 19:20:11.559931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.519 [2024-11-26 19:20:11.559961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.519 qpair failed and we were unable to recover it. 
00:29:54.519 [2024-11-26 19:20:11.560318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.519 [2024-11-26 19:20:11.560349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.519 qpair failed and we were unable to recover it. 00:29:54.519 [2024-11-26 19:20:11.560716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.519 [2024-11-26 19:20:11.560747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.519 qpair failed and we were unable to recover it. 00:29:54.519 [2024-11-26 19:20:11.561105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.519 [2024-11-26 19:20:11.561136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.519 qpair failed and we were unable to recover it. 00:29:54.519 [2024-11-26 19:20:11.561526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.519 [2024-11-26 19:20:11.561556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.519 qpair failed and we were unable to recover it. 00:29:54.519 [2024-11-26 19:20:11.561927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.519 [2024-11-26 19:20:11.561956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.519 qpair failed and we were unable to recover it. 00:29:54.519 [2024-11-26 19:20:11.562200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.519 [2024-11-26 19:20:11.562233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.519 qpair failed and we were unable to recover it. 00:29:54.519 [2024-11-26 19:20:11.562634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.519 [2024-11-26 19:20:11.562662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.519 qpair failed and we were unable to recover it. 00:29:54.519 [2024-11-26 19:20:11.563021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.519 [2024-11-26 19:20:11.563049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.519 qpair failed and we were unable to recover it. 00:29:54.519 [2024-11-26 19:20:11.563293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.519 [2024-11-26 19:20:11.563327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.519 qpair failed and we were unable to recover it. 00:29:54.519 [2024-11-26 19:20:11.563700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.519 [2024-11-26 19:20:11.563729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.519 qpair failed and we were unable to recover it. 
00:29:54.519 [2024-11-26 19:20:11.564106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.519 [2024-11-26 19:20:11.564134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.519 qpair failed and we were unable to recover it. 00:29:54.519 [2024-11-26 19:20:11.564502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.519 [2024-11-26 19:20:11.564531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.519 qpair failed and we were unable to recover it. 00:29:54.519 [2024-11-26 19:20:11.564889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.519 [2024-11-26 19:20:11.564917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.519 qpair failed and we were unable to recover it. 00:29:54.519 [2024-11-26 19:20:11.565276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.519 [2024-11-26 19:20:11.565306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.519 qpair failed and we were unable to recover it. 00:29:54.519 [2024-11-26 19:20:11.565674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.519 [2024-11-26 19:20:11.565705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.519 qpair failed and we were unable to recover it. 00:29:54.519 [2024-11-26 19:20:11.566045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.519 [2024-11-26 19:20:11.566081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.519 qpair failed and we were unable to recover it. 00:29:54.519 [2024-11-26 19:20:11.566416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.519 [2024-11-26 19:20:11.566446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.519 qpair failed and we were unable to recover it. 00:29:54.519 [2024-11-26 19:20:11.566782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.519 [2024-11-26 19:20:11.566812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.519 qpair failed and we were unable to recover it. 00:29:54.519 [2024-11-26 19:20:11.567052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.519 [2024-11-26 19:20:11.567081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.519 qpair failed and we were unable to recover it. 00:29:54.519 [2024-11-26 19:20:11.567446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.519 [2024-11-26 19:20:11.567475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.519 qpair failed and we were unable to recover it. 
00:29:54.519 [2024-11-26 19:20:11.567647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.519 [2024-11-26 19:20:11.567678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.519 qpair failed and we were unable to recover it. 00:29:54.519 [2024-11-26 19:20:11.567968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.519 [2024-11-26 19:20:11.567997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.519 qpair failed and we were unable to recover it. 00:29:54.519 [2024-11-26 19:20:11.568260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.519 [2024-11-26 19:20:11.568291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.519 qpair failed and we were unable to recover it. 00:29:54.519 [2024-11-26 19:20:11.568673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.519 [2024-11-26 19:20:11.568701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.519 qpair failed and we were unable to recover it. 00:29:54.519 [2024-11-26 19:20:11.569053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.519 [2024-11-26 19:20:11.569081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.519 qpair failed and we were unable to recover it. 00:29:54.519 [2024-11-26 19:20:11.569354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.520 [2024-11-26 19:20:11.569384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.520 qpair failed and we were unable to recover it. 00:29:54.520 [2024-11-26 19:20:11.569734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.520 [2024-11-26 19:20:11.569763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.520 qpair failed and we were unable to recover it. 00:29:54.520 [2024-11-26 19:20:11.570137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.520 [2024-11-26 19:20:11.570176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.520 qpair failed and we were unable to recover it. 00:29:54.520 [2024-11-26 19:20:11.570605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.520 [2024-11-26 19:20:11.570635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.520 qpair failed and we were unable to recover it. 00:29:54.520 [2024-11-26 19:20:11.571032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.520 [2024-11-26 19:20:11.571061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.520 qpair failed and we were unable to recover it. 
00:29:54.520 [2024-11-26 19:20:11.571304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.520 [2024-11-26 19:20:11.571334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.520 qpair failed and we were unable to recover it. 00:29:54.520 [2024-11-26 19:20:11.571722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.520 [2024-11-26 19:20:11.571750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.520 qpair failed and we were unable to recover it. 00:29:54.520 [2024-11-26 19:20:11.571975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.520 [2024-11-26 19:20:11.572006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.520 qpair failed and we were unable to recover it. 00:29:54.520 [2024-11-26 19:20:11.572346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.520 [2024-11-26 19:20:11.572375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.520 qpair failed and we were unable to recover it. 00:29:54.520 [2024-11-26 19:20:11.572742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.520 [2024-11-26 19:20:11.572771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.520 qpair failed and we were unable to recover it. 00:29:54.520 [2024-11-26 19:20:11.573130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.520 [2024-11-26 19:20:11.573173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.520 qpair failed and we were unable to recover it. 00:29:54.520 [2024-11-26 19:20:11.575673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.520 [2024-11-26 19:20:11.575749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.520 qpair failed and we were unable to recover it. 00:29:54.520 [2024-11-26 19:20:11.576060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.520 [2024-11-26 19:20:11.576098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.520 qpair failed and we were unable to recover it. 00:29:54.520 [2024-11-26 19:20:11.576508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.520 [2024-11-26 19:20:11.576540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.520 qpair failed and we were unable to recover it. 00:29:54.520 [2024-11-26 19:20:11.576883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.520 [2024-11-26 19:20:11.576912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.520 qpair failed and we were unable to recover it. 
00:29:54.520 [2024-11-26 19:20:11.577273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.520 [2024-11-26 19:20:11.577304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.520 qpair failed and we were unable to recover it. 00:29:54.520 [2024-11-26 19:20:11.577661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.520 [2024-11-26 19:20:11.577689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.520 qpair failed and we were unable to recover it. 00:29:54.520 [2024-11-26 19:20:11.578048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.520 [2024-11-26 19:20:11.578085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.520 qpair failed and we were unable to recover it. 00:29:54.520 [2024-11-26 19:20:11.578510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.520 [2024-11-26 19:20:11.578542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.520 qpair failed and we were unable to recover it. 00:29:54.520 [2024-11-26 19:20:11.578975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.520 [2024-11-26 19:20:11.579004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.520 qpair failed and we were unable to recover it. 00:29:54.520 [2024-11-26 19:20:11.579340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.520 [2024-11-26 19:20:11.579369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.520 qpair failed and we were unable to recover it. 00:29:54.520 [2024-11-26 19:20:11.579746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.520 [2024-11-26 19:20:11.579775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.520 qpair failed and we were unable to recover it. 00:29:54.520 [2024-11-26 19:20:11.580137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.520 [2024-11-26 19:20:11.580178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.520 qpair failed and we were unable to recover it. 00:29:54.520 [2024-11-26 19:20:11.580574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.520 [2024-11-26 19:20:11.580604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.520 qpair failed and we were unable to recover it. 00:29:54.520 [2024-11-26 19:20:11.580847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.520 [2024-11-26 19:20:11.580875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.520 qpair failed and we were unable to recover it. 
00:29:54.520 [2024-11-26 19:20:11.581219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.520 [2024-11-26 19:20:11.581251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.520 qpair failed and we were unable to recover it. 00:29:54.520 [2024-11-26 19:20:11.581637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.520 [2024-11-26 19:20:11.581666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.520 qpair failed and we were unable to recover it. 00:29:54.520 [2024-11-26 19:20:11.581922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.520 [2024-11-26 19:20:11.581955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.520 qpair failed and we were unable to recover it. 00:29:54.520 [2024-11-26 19:20:11.582373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.520 [2024-11-26 19:20:11.582404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.520 qpair failed and we were unable to recover it. 00:29:54.520 [2024-11-26 19:20:11.582706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.520 [2024-11-26 19:20:11.582743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.520 qpair failed and we were unable to recover it. 00:29:54.520 [2024-11-26 19:20:11.583075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.520 [2024-11-26 19:20:11.583104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.520 qpair failed and we were unable to recover it. 00:29:54.520 [2024-11-26 19:20:11.583563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.521 [2024-11-26 19:20:11.583594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.521 qpair failed and we were unable to recover it. 00:29:54.521 [2024-11-26 19:20:11.583953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.521 [2024-11-26 19:20:11.583982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.521 qpair failed and we were unable to recover it. 00:29:54.521 [2024-11-26 19:20:11.584341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.521 [2024-11-26 19:20:11.584372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.521 qpair failed and we were unable to recover it. 00:29:54.521 [2024-11-26 19:20:11.584614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.521 [2024-11-26 19:20:11.584642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.521 qpair failed and we were unable to recover it. 
00:29:54.521 [2024-11-26 19:20:11.584909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.521 [2024-11-26 19:20:11.584938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.521 qpair failed and we were unable to recover it. 00:29:54.521 [2024-11-26 19:20:11.585290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.521 [2024-11-26 19:20:11.585320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.521 qpair failed and we were unable to recover it. 00:29:54.521 [2024-11-26 19:20:11.585676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.521 [2024-11-26 19:20:11.585705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.521 qpair failed and we were unable to recover it. 00:29:54.521 [2024-11-26 19:20:11.586068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.521 [2024-11-26 19:20:11.586098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.521 qpair failed and we were unable to recover it. 00:29:54.521 [2024-11-26 19:20:11.586472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.521 [2024-11-26 19:20:11.586501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.521 qpair failed and we were unable to recover it. 00:29:54.521 [2024-11-26 19:20:11.586857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.521 [2024-11-26 19:20:11.586885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.521 qpair failed and we were unable to recover it. 00:29:54.521 [2024-11-26 19:20:11.587124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.521 [2024-11-26 19:20:11.587157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.521 qpair failed and we were unable to recover it. 00:29:54.521 [2024-11-26 19:20:11.587540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.521 [2024-11-26 19:20:11.587578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.521 qpair failed and we were unable to recover it. 00:29:54.521 [2024-11-26 19:20:11.587804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.521 [2024-11-26 19:20:11.587833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.521 qpair failed and we were unable to recover it. 00:29:54.521 [2024-11-26 19:20:11.588188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.521 [2024-11-26 19:20:11.588220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.521 qpair failed and we were unable to recover it. 
00:29:54.521 [2024-11-26 19:20:11.589251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.521 [2024-11-26 19:20:11.589308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.521 qpair failed and we were unable to recover it. 00:29:54.521 [2024-11-26 19:20:11.589688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.521 [2024-11-26 19:20:11.589727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.521 qpair failed and we were unable to recover it. 00:29:54.521 [2024-11-26 19:20:11.590570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.521 [2024-11-26 19:20:11.590618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.521 qpair failed and we were unable to recover it. 00:29:54.521 [2024-11-26 19:20:11.591015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.521 [2024-11-26 19:20:11.591057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.521 qpair failed and we were unable to recover it. 00:29:54.521 [2024-11-26 19:20:11.591334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.521 [2024-11-26 19:20:11.591371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.521 qpair failed and we were unable to recover it. 00:29:54.521 [2024-11-26 19:20:11.591756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.521 [2024-11-26 19:20:11.591788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.521 qpair failed and we were unable to recover it. 00:29:54.521 [2024-11-26 19:20:11.592195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.521 [2024-11-26 19:20:11.592229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.521 qpair failed and we were unable to recover it. 00:29:54.521 [2024-11-26 19:20:11.592569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.521 [2024-11-26 19:20:11.592598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.521 qpair failed and we were unable to recover it. 00:29:54.521 [2024-11-26 19:20:11.592964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.521 [2024-11-26 19:20:11.592994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.521 qpair failed and we were unable to recover it. 00:29:54.521 [2024-11-26 19:20:11.593373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.521 [2024-11-26 19:20:11.593405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.521 qpair failed and we were unable to recover it. 
00:29:54.521 [2024-11-26 19:20:11.593792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.521 [2024-11-26 19:20:11.593820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420
00:29:54.521 qpair failed and we were unable to recover it.
[... the same three-line sequence — "connect() failed, errno = 111" from posix.c:1054:posix_sock_create, "sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420" from nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock, and "qpair failed and we were unable to recover it." — repeats for every reconnect attempt from 2024-11-26 19:20:11.593 through 19:20:11.673, all against the same tqpair, address, and port ...]
00:29:54.527 [2024-11-26 19:20:11.673068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.527 [2024-11-26 19:20:11.673096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420
00:29:54.527 qpair failed and we were unable to recover it.
00:29:54.527 [2024-11-26 19:20:11.673398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.527 [2024-11-26 19:20:11.673434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.527 qpair failed and we were unable to recover it. 00:29:54.527 [2024-11-26 19:20:11.673796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.527 [2024-11-26 19:20:11.673825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.527 qpair failed and we were unable to recover it. 00:29:54.527 [2024-11-26 19:20:11.674186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.527 [2024-11-26 19:20:11.674218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.527 qpair failed and we were unable to recover it. 00:29:54.527 [2024-11-26 19:20:11.674605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.527 [2024-11-26 19:20:11.674633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.527 qpair failed and we were unable to recover it. 00:29:54.527 [2024-11-26 19:20:11.674989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.527 [2024-11-26 19:20:11.675019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.527 qpair failed and we were unable to recover it. 00:29:54.527 [2024-11-26 19:20:11.675313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.527 [2024-11-26 19:20:11.675344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.527 qpair failed and we were unable to recover it. 00:29:54.527 [2024-11-26 19:20:11.675706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.528 [2024-11-26 19:20:11.675736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.528 qpair failed and we were unable to recover it. 00:29:54.528 [2024-11-26 19:20:11.676108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.528 [2024-11-26 19:20:11.676137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.528 qpair failed and we were unable to recover it. 00:29:54.528 [2024-11-26 19:20:11.676541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.528 [2024-11-26 19:20:11.676571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.528 qpair failed and we were unable to recover it. 00:29:54.528 [2024-11-26 19:20:11.676923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.528 [2024-11-26 19:20:11.676953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.528 qpair failed and we were unable to recover it. 
00:29:54.528 [2024-11-26 19:20:11.677340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.528 [2024-11-26 19:20:11.677371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.528 qpair failed and we were unable to recover it. 00:29:54.528 [2024-11-26 19:20:11.677685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.528 [2024-11-26 19:20:11.677713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.528 qpair failed and we were unable to recover it. 00:29:54.528 [2024-11-26 19:20:11.678090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.528 [2024-11-26 19:20:11.678118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.528 qpair failed and we were unable to recover it. 00:29:54.528 [2024-11-26 19:20:11.678429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.528 [2024-11-26 19:20:11.678459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.528 qpair failed and we were unable to recover it. 00:29:54.528 [2024-11-26 19:20:11.678810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.528 [2024-11-26 19:20:11.678838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.528 qpair failed and we were unable to recover it. 00:29:54.528 [2024-11-26 19:20:11.679083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.528 [2024-11-26 19:20:11.679111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.528 qpair failed and we were unable to recover it. 00:29:54.528 [2024-11-26 19:20:11.679492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.528 [2024-11-26 19:20:11.679524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.528 qpair failed and we were unable to recover it. 00:29:54.528 [2024-11-26 19:20:11.679876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.528 [2024-11-26 19:20:11.679904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.528 qpair failed and we were unable to recover it. 00:29:54.528 [2024-11-26 19:20:11.680265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.528 [2024-11-26 19:20:11.680297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.528 qpair failed and we were unable to recover it. 00:29:54.528 [2024-11-26 19:20:11.680444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.528 [2024-11-26 19:20:11.680471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.528 qpair failed and we were unable to recover it. 
00:29:54.528 [2024-11-26 19:20:11.680868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.528 [2024-11-26 19:20:11.680895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.528 qpair failed and we were unable to recover it. 00:29:54.528 [2024-11-26 19:20:11.681254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.528 [2024-11-26 19:20:11.681285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.528 qpair failed and we were unable to recover it. 00:29:54.528 [2024-11-26 19:20:11.681542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.528 [2024-11-26 19:20:11.681570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.528 qpair failed and we were unable to recover it. 00:29:54.528 [2024-11-26 19:20:11.681941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.528 [2024-11-26 19:20:11.681971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.528 qpair failed and we were unable to recover it. 00:29:54.528 [2024-11-26 19:20:11.682346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.528 [2024-11-26 19:20:11.682376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.528 qpair failed and we were unable to recover it. 00:29:54.528 [2024-11-26 19:20:11.682764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.528 [2024-11-26 19:20:11.682792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.528 qpair failed and we were unable to recover it. 00:29:54.528 [2024-11-26 19:20:11.683151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.528 [2024-11-26 19:20:11.683190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.528 qpair failed and we were unable to recover it. 00:29:54.528 [2024-11-26 19:20:11.683581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.528 [2024-11-26 19:20:11.683609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.528 qpair failed and we were unable to recover it. 00:29:54.528 [2024-11-26 19:20:11.683977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.528 [2024-11-26 19:20:11.684005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.528 qpair failed and we were unable to recover it. 00:29:54.528 [2024-11-26 19:20:11.684279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.528 [2024-11-26 19:20:11.684309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.528 qpair failed and we were unable to recover it. 
00:29:54.528 [2024-11-26 19:20:11.684601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.528 [2024-11-26 19:20:11.684629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.528 qpair failed and we were unable to recover it. 00:29:54.528 [2024-11-26 19:20:11.684941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.528 [2024-11-26 19:20:11.684968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.528 qpair failed and we were unable to recover it. 00:29:54.528 [2024-11-26 19:20:11.685227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.528 [2024-11-26 19:20:11.685256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.528 qpair failed and we were unable to recover it. 00:29:54.528 [2024-11-26 19:20:11.685615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.528 [2024-11-26 19:20:11.685643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.528 qpair failed and we were unable to recover it. 00:29:54.528 [2024-11-26 19:20:11.685999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.528 [2024-11-26 19:20:11.686027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.528 qpair failed and we were unable to recover it. 00:29:54.528 [2024-11-26 19:20:11.686406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.528 [2024-11-26 19:20:11.686436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.528 qpair failed and we were unable to recover it. 00:29:54.528 [2024-11-26 19:20:11.686785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.528 [2024-11-26 19:20:11.686813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.528 qpair failed and we were unable to recover it. 00:29:54.528 [2024-11-26 19:20:11.687066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.528 [2024-11-26 19:20:11.687099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.528 qpair failed and we were unable to recover it. 00:29:54.528 [2024-11-26 19:20:11.687473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.528 [2024-11-26 19:20:11.687504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.528 qpair failed and we were unable to recover it. 00:29:54.528 [2024-11-26 19:20:11.687873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.528 [2024-11-26 19:20:11.687902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.528 qpair failed and we were unable to recover it. 
00:29:54.528 [2024-11-26 19:20:11.688281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.528 [2024-11-26 19:20:11.688312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.528 qpair failed and we were unable to recover it. 00:29:54.528 [2024-11-26 19:20:11.688693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.528 [2024-11-26 19:20:11.688721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.528 qpair failed and we were unable to recover it. 00:29:54.528 [2024-11-26 19:20:11.689095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.528 [2024-11-26 19:20:11.689123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.529 qpair failed and we were unable to recover it. 00:29:54.529 [2024-11-26 19:20:11.689541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.529 [2024-11-26 19:20:11.689571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.529 qpair failed and we were unable to recover it. 00:29:54.529 [2024-11-26 19:20:11.689903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.529 [2024-11-26 19:20:11.689933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.529 qpair failed and we were unable to recover it. 00:29:54.529 [2024-11-26 19:20:11.690316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.529 [2024-11-26 19:20:11.690345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.529 qpair failed and we were unable to recover it. 00:29:54.529 [2024-11-26 19:20:11.690725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.529 [2024-11-26 19:20:11.690754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.529 qpair failed and we were unable to recover it. 00:29:54.529 [2024-11-26 19:20:11.691126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.529 [2024-11-26 19:20:11.691155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.529 qpair failed and we were unable to recover it. 00:29:54.529 [2024-11-26 19:20:11.691440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.529 [2024-11-26 19:20:11.691472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.529 qpair failed and we were unable to recover it. 00:29:54.529 [2024-11-26 19:20:11.691859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.529 [2024-11-26 19:20:11.691888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.529 qpair failed and we were unable to recover it. 
00:29:54.529 [2024-11-26 19:20:11.692218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.529 [2024-11-26 19:20:11.692249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.529 qpair failed and we were unable to recover it. 00:29:54.529 [2024-11-26 19:20:11.692661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.529 [2024-11-26 19:20:11.692690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.529 qpair failed and we were unable to recover it. 00:29:54.529 [2024-11-26 19:20:11.693068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.529 [2024-11-26 19:20:11.693096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.529 qpair failed and we were unable to recover it. 00:29:54.529 [2024-11-26 19:20:11.693469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.529 [2024-11-26 19:20:11.693499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.529 qpair failed and we were unable to recover it. 00:29:54.529 [2024-11-26 19:20:11.693859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.529 [2024-11-26 19:20:11.693888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.529 qpair failed and we were unable to recover it. 00:29:54.529 [2024-11-26 19:20:11.694133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.529 [2024-11-26 19:20:11.694172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.529 qpair failed and we were unable to recover it. 00:29:54.529 [2024-11-26 19:20:11.694577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.529 [2024-11-26 19:20:11.694606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.529 qpair failed and we were unable to recover it. 00:29:54.529 [2024-11-26 19:20:11.694944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.529 [2024-11-26 19:20:11.694973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.529 qpair failed and we were unable to recover it. 00:29:54.529 [2024-11-26 19:20:11.695216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.529 [2024-11-26 19:20:11.695245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.529 qpair failed and we were unable to recover it. 00:29:54.529 [2024-11-26 19:20:11.695471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.529 [2024-11-26 19:20:11.695499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.529 qpair failed and we were unable to recover it. 
00:29:54.529 [2024-11-26 19:20:11.695846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.529 [2024-11-26 19:20:11.695874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.529 qpair failed and we were unable to recover it. 00:29:54.529 [2024-11-26 19:20:11.696215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.529 [2024-11-26 19:20:11.696245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.529 qpair failed and we were unable to recover it. 00:29:54.529 [2024-11-26 19:20:11.696490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.529 [2024-11-26 19:20:11.696522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.529 qpair failed and we were unable to recover it. 00:29:54.529 [2024-11-26 19:20:11.696865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.529 [2024-11-26 19:20:11.696894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.529 qpair failed and we were unable to recover it. 00:29:54.529 [2024-11-26 19:20:11.697272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.529 [2024-11-26 19:20:11.697308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.529 qpair failed and we were unable to recover it. 00:29:54.806 [2024-11-26 19:20:11.697658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.806 [2024-11-26 19:20:11.697687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.806 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.698030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.698061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.698410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.698441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.698820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.698848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.699211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.699241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 
00:29:54.807 [2024-11-26 19:20:11.699523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.699552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.699786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.699817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.700185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.700216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.700584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.700614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.700959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.700988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.701345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.701375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.701740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.701768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.702140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.702184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.702569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.702598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.702957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.702985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 
00:29:54.807 [2024-11-26 19:20:11.703339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.703369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.703722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.703750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.704110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.704138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.704419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.704448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.704804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.704832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.705088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.705115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.705502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.705532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.705889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.705916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.706202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.706232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.706552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.706581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 
00:29:54.807 [2024-11-26 19:20:11.706815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.706846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.707180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.707210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.707585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.707615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.707846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.707875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.708213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.708242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.708596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.708623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.708984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.709012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.709279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.709308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.709662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.709690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.710058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.710087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 
00:29:54.807 [2024-11-26 19:20:11.710469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.710499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.710689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.710717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.710985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.711013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.711340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.711369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.711742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.711770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.712081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.712110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.712466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.712496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.712851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.712880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.713240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.807 [2024-11-26 19:20:11.713270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.807 qpair failed and we were unable to recover it. 00:29:54.807 [2024-11-26 19:20:11.713637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.808 [2024-11-26 19:20:11.713666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.808 qpair failed and we were unable to recover it. 
00:29:54.808 [2024-11-26 19:20:11.714027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.808 [2024-11-26 19:20:11.714057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.808 qpair failed and we were unable to recover it. 00:29:54.808 [2024-11-26 19:20:11.714437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.808 [2024-11-26 19:20:11.714468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.808 qpair failed and we were unable to recover it. 00:29:54.808 [2024-11-26 19:20:11.714814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.808 [2024-11-26 19:20:11.714845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.808 qpair failed and we were unable to recover it. 00:29:54.808 [2024-11-26 19:20:11.715196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.808 [2024-11-26 19:20:11.715226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.808 qpair failed and we were unable to recover it. 00:29:54.808 [2024-11-26 19:20:11.715577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.808 [2024-11-26 19:20:11.715607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.808 qpair failed and we were unable to recover it. 00:29:54.808 [2024-11-26 19:20:11.715985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.808 [2024-11-26 19:20:11.716013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.808 qpair failed and we were unable to recover it. 00:29:54.808 [2024-11-26 19:20:11.716259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.808 [2024-11-26 19:20:11.716288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.808 qpair failed and we were unable to recover it. 00:29:54.808 [2024-11-26 19:20:11.716542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.808 [2024-11-26 19:20:11.716574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.808 qpair failed and we were unable to recover it. 00:29:54.808 [2024-11-26 19:20:11.716911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.808 [2024-11-26 19:20:11.716939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.808 qpair failed and we were unable to recover it. 00:29:54.808 [2024-11-26 19:20:11.717295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.808 [2024-11-26 19:20:11.717325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.808 qpair failed and we were unable to recover it. 
00:29:54.808 [2024-11-26 19:20:11.717726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.808 [2024-11-26 19:20:11.717755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.808 qpair failed and we were unable to recover it. 00:29:54.808 [2024-11-26 19:20:11.718110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.808 [2024-11-26 19:20:11.718139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.808 qpair failed and we were unable to recover it. 00:29:54.808 [2024-11-26 19:20:11.718499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.808 [2024-11-26 19:20:11.718528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.808 qpair failed and we were unable to recover it. 00:29:54.808 [2024-11-26 19:20:11.718867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.808 [2024-11-26 19:20:11.718897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.808 qpair failed and we were unable to recover it. 00:29:54.808 [2024-11-26 19:20:11.719236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.808 [2024-11-26 19:20:11.719266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.808 qpair failed and we were unable to recover it. 00:29:54.808 [2024-11-26 19:20:11.719634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.808 [2024-11-26 19:20:11.719663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.808 qpair failed and we were unable to recover it. 00:29:54.808 [2024-11-26 19:20:11.720032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.808 [2024-11-26 19:20:11.720060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.808 qpair failed and we were unable to recover it. 00:29:54.808 [2024-11-26 19:20:11.720436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.808 [2024-11-26 19:20:11.720465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.808 qpair failed and we were unable to recover it. 00:29:54.808 [2024-11-26 19:20:11.720829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.808 [2024-11-26 19:20:11.720856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.808 qpair failed and we were unable to recover it. 00:29:54.808 [2024-11-26 19:20:11.721218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.808 [2024-11-26 19:20:11.721247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.808 qpair failed and we were unable to recover it. 
00:29:54.808 [2024-11-26 19:20:11.721628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:54.808 [2024-11-26 19:20:11.721656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 
00:29:54.808 qpair failed and we were unable to recover it. 
00:29:54.813 [... the same three-line failure repeats for every subsequent reconnection attempt, a few hundred microseconds apart, through 2024-11-26 19:20:11.801286; each attempt on tqpair=0x13780c0 against 10.0.0.2 port 4420 fails with errno = 111 and the qpair cannot be recovered ...]
00:29:54.813 [2024-11-26 19:20:11.801613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.813 [2024-11-26 19:20:11.801650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.813 qpair failed and we were unable to recover it. 00:29:54.813 [2024-11-26 19:20:11.801997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.813 [2024-11-26 19:20:11.802026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.813 qpair failed and we were unable to recover it. 00:29:54.813 [2024-11-26 19:20:11.802390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.813 [2024-11-26 19:20:11.802420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.813 qpair failed and we were unable to recover it. 00:29:54.813 [2024-11-26 19:20:11.802784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.813 [2024-11-26 19:20:11.802812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.813 qpair failed and we were unable to recover it. 00:29:54.813 [2024-11-26 19:20:11.803182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.813 [2024-11-26 19:20:11.803212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.813 qpair failed and we were unable to recover it. 00:29:54.813 [2024-11-26 19:20:11.803573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.813 [2024-11-26 19:20:11.803601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.813 qpair failed and we were unable to recover it. 00:29:54.813 [2024-11-26 19:20:11.803978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.813 [2024-11-26 19:20:11.804007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.813 qpair failed and we were unable to recover it. 00:29:54.813 [2024-11-26 19:20:11.804387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.813 [2024-11-26 19:20:11.804416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.813 qpair failed and we were unable to recover it. 00:29:54.813 [2024-11-26 19:20:11.804771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.813 [2024-11-26 19:20:11.804799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.813 qpair failed and we were unable to recover it. 00:29:54.813 [2024-11-26 19:20:11.805179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.813 [2024-11-26 19:20:11.805214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.813 qpair failed and we were unable to recover it. 
00:29:54.813 [2024-11-26 19:20:11.805554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.813 [2024-11-26 19:20:11.805584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.813 qpair failed and we were unable to recover it. 00:29:54.813 [2024-11-26 19:20:11.805963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.813 [2024-11-26 19:20:11.805992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.813 qpair failed and we were unable to recover it. 00:29:54.813 [2024-11-26 19:20:11.806252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.813 [2024-11-26 19:20:11.806281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.813 qpair failed and we were unable to recover it. 00:29:54.813 [2024-11-26 19:20:11.806664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.813 [2024-11-26 19:20:11.806692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.813 qpair failed and we were unable to recover it. 00:29:54.813 [2024-11-26 19:20:11.807048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.813 [2024-11-26 19:20:11.807077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.813 qpair failed and we were unable to recover it. 00:29:54.813 [2024-11-26 19:20:11.807320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.813 [2024-11-26 19:20:11.807351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.813 qpair failed and we were unable to recover it. 00:29:54.813 [2024-11-26 19:20:11.807700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.813 [2024-11-26 19:20:11.807728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.813 qpair failed and we were unable to recover it. 00:29:54.813 [2024-11-26 19:20:11.808097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.813 [2024-11-26 19:20:11.808126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.813 qpair failed and we were unable to recover it. 00:29:54.813 [2024-11-26 19:20:11.808492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.813 [2024-11-26 19:20:11.808523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.813 qpair failed and we were unable to recover it. 00:29:54.813 [2024-11-26 19:20:11.808869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.813 [2024-11-26 19:20:11.808898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.813 qpair failed and we were unable to recover it. 
00:29:54.813 [2024-11-26 19:20:11.809288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.813 [2024-11-26 19:20:11.809317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.813 qpair failed and we were unable to recover it. 00:29:54.813 [2024-11-26 19:20:11.809541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.813 [2024-11-26 19:20:11.809572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.813 qpair failed and we were unable to recover it. 00:29:54.813 [2024-11-26 19:20:11.809932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.813 [2024-11-26 19:20:11.809960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.813 qpair failed and we were unable to recover it. 00:29:54.813 [2024-11-26 19:20:11.810307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.813 [2024-11-26 19:20:11.810338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.813 qpair failed and we were unable to recover it. 00:29:54.813 [2024-11-26 19:20:11.810683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.813 [2024-11-26 19:20:11.810711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.813 qpair failed and we were unable to recover it. 00:29:54.813 [2024-11-26 19:20:11.811113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.813 [2024-11-26 19:20:11.811140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.813 qpair failed and we were unable to recover it. 00:29:54.813 [2024-11-26 19:20:11.811512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.813 [2024-11-26 19:20:11.811542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.813 qpair failed and we were unable to recover it. 00:29:54.814 [2024-11-26 19:20:11.811910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.811937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 00:29:54.814 [2024-11-26 19:20:11.812300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.812330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 00:29:54.814 [2024-11-26 19:20:11.812755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.812783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 
00:29:54.814 [2024-11-26 19:20:11.813150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.813189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 00:29:54.814 [2024-11-26 19:20:11.813443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.813471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 00:29:54.814 [2024-11-26 19:20:11.813817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.813846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 00:29:54.814 [2024-11-26 19:20:11.814204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.814235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 00:29:54.814 [2024-11-26 19:20:11.814617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.814645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 00:29:54.814 [2024-11-26 19:20:11.815006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.815035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 00:29:54.814 [2024-11-26 19:20:11.815370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.815406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 00:29:54.814 [2024-11-26 19:20:11.815779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.815808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 00:29:54.814 [2024-11-26 19:20:11.816178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.816208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 00:29:54.814 [2024-11-26 19:20:11.816570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.816598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 
00:29:54.814 [2024-11-26 19:20:11.816847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.816875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 00:29:54.814 [2024-11-26 19:20:11.817123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.817152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 00:29:54.814 [2024-11-26 19:20:11.817530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.817559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 00:29:54.814 [2024-11-26 19:20:11.817921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.817948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 00:29:54.814 [2024-11-26 19:20:11.818310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.818340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 00:29:54.814 [2024-11-26 19:20:11.818774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.818803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 00:29:54.814 [2024-11-26 19:20:11.819189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.819218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 00:29:54.814 [2024-11-26 19:20:11.819579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.819608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 00:29:54.814 [2024-11-26 19:20:11.819976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.820004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 00:29:54.814 [2024-11-26 19:20:11.820378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.820408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 
00:29:54.814 [2024-11-26 19:20:11.820774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.820803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 00:29:54.814 [2024-11-26 19:20:11.821171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.821200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 00:29:54.814 [2024-11-26 19:20:11.821554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.821582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 00:29:54.814 [2024-11-26 19:20:11.821940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.821968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 00:29:54.814 [2024-11-26 19:20:11.822193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.822226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 00:29:54.814 [2024-11-26 19:20:11.822604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.822632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 00:29:54.814 [2024-11-26 19:20:11.822998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.823027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 00:29:54.814 [2024-11-26 19:20:11.823396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.823425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 00:29:54.814 [2024-11-26 19:20:11.823785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.823813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 00:29:54.814 [2024-11-26 19:20:11.824192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.824221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 
00:29:54.814 [2024-11-26 19:20:11.824583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.824611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 00:29:54.814 [2024-11-26 19:20:11.824976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.825003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 00:29:54.814 [2024-11-26 19:20:11.825397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.825426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 00:29:54.814 [2024-11-26 19:20:11.825778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.825818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 00:29:54.814 [2024-11-26 19:20:11.826192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.826222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 00:29:54.814 [2024-11-26 19:20:11.826593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.826622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 00:29:54.814 [2024-11-26 19:20:11.826973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.814 [2024-11-26 19:20:11.827001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.814 qpair failed and we were unable to recover it. 00:29:54.814 [2024-11-26 19:20:11.827357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.827386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 19:20:11.827754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.827784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 19:20:11.828148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.828195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 
00:29:54.815 [2024-11-26 19:20:11.828597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.828625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 19:20:11.828985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.829013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 19:20:11.829384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.829415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 19:20:11.829784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.829812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 19:20:11.830065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.830093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 19:20:11.830495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.830525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 19:20:11.830862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.830893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 19:20:11.831232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.831262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 19:20:11.831628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.831658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 19:20:11.832029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.832058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 
00:29:54.815 [2024-11-26 19:20:11.832480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.832509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 19:20:11.832868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.832899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 19:20:11.833137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.833185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 19:20:11.833540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.833569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 19:20:11.833941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.833971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 19:20:11.834317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.834347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 19:20:11.834690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.834718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 19:20:11.835088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.835115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 19:20:11.835452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.835482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 19:20:11.835884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.835911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 
00:29:54.815 [2024-11-26 19:20:11.836285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.836316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 19:20:11.836714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.836741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 19:20:11.837113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.837142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 19:20:11.837527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.837557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 19:20:11.837910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.837940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 19:20:11.838269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.838300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 19:20:11.838657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.838687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 19:20:11.839044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.839073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 19:20:11.839436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.839468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 19:20:11.839831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.839861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 
00:29:54.815 [2024-11-26 19:20:11.840221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.840251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 19:20:11.840492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.840526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 19:20:11.840879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.840910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 19:20:11.841261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.841292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 19:20:11.841649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.841685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 19:20:11.842081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.842112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 19:20:11.842494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.842526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 19:20:11.842886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.842916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 19:20:11.843293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.815 [2024-11-26 19:20:11.843325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.815 qpair failed and we were unable to recover it. 00:29:54.815 [2024-11-26 19:20:11.843683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 19:20:11.843712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 
00:29:54.816 [2024-11-26 19:20:11.844074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 19:20:11.844110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 19:20:11.844551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 19:20:11.844583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 19:20:11.844945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 19:20:11.844975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 19:20:11.845312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 19:20:11.845341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 19:20:11.845724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 19:20:11.845752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 19:20:11.846116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 19:20:11.846144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 19:20:11.846513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 19:20:11.846544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 19:20:11.846917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 19:20:11.846948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 19:20:11.847321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 19:20:11.847353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 19:20:11.847704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 19:20:11.847733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 
00:29:54.816 [2024-11-26 19:20:11.848138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 19:20:11.848180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 19:20:11.848630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 19:20:11.848659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 19:20:11.848958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 19:20:11.848986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 19:20:11.849235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 19:20:11.849264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 19:20:11.849662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 19:20:11.849690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 19:20:11.850081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 19:20:11.850111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 19:20:11.850577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 19:20:11.850608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 19:20:11.850979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 19:20:11.851009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 19:20:11.851367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 19:20:11.851397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 00:29:54.816 [2024-11-26 19:20:11.851776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 19:20:11.851807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it. 
00:29:54.816 [2024-11-26 19:20:11.852178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.816 [2024-11-26 19:20:11.852209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.816 qpair failed and we were unable to recover it.
00:29:54.816 [... the same three-message sequence — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. — repeats for every reconnect attempt from 19:20:11.852 through 19:20:11.933; roughly 200 further repetitions omitted here ...]
00:29:54.821 [2024-11-26 19:20:11.933414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.933443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it.
00:29:54.821 [2024-11-26 19:20:11.933862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.933890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-11-26 19:20:11.934244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.934273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-11-26 19:20:11.934624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.934652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-11-26 19:20:11.935014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.935043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-11-26 19:20:11.935413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.935442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-11-26 19:20:11.935807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.935836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-11-26 19:20:11.936207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.936236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-11-26 19:20:11.936611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.936639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-11-26 19:20:11.937019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.937053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-11-26 19:20:11.937392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.937423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 
00:29:54.821 [2024-11-26 19:20:11.937781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.937809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-11-26 19:20:11.938061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.938089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-11-26 19:20:11.938455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.938485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-11-26 19:20:11.938853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.938881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-11-26 19:20:11.939312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.939341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-11-26 19:20:11.939693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.939722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-11-26 19:20:11.939983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.940011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-11-26 19:20:11.940339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.940368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-11-26 19:20:11.940729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.940757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-11-26 19:20:11.941118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.941146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 
00:29:54.821 [2024-11-26 19:20:11.941530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.941558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-11-26 19:20:11.941914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.941942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-11-26 19:20:11.942303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.942335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-11-26 19:20:11.942694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.942722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-11-26 19:20:11.943067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.943094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-11-26 19:20:11.943457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.943486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-11-26 19:20:11.943863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.943891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-11-26 19:20:11.944277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.944306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-11-26 19:20:11.944652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.944680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-11-26 19:20:11.945050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.945078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 
00:29:54.821 [2024-11-26 19:20:11.945450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.945479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-11-26 19:20:11.945830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.945858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-11-26 19:20:11.946211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.946243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-11-26 19:20:11.946474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.946506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-11-26 19:20:11.946871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.946898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-11-26 19:20:11.947253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.947289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-11-26 19:20:11.947658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.947686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-11-26 19:20:11.948024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.948053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-11-26 19:20:11.948385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.948415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-11-26 19:20:11.948780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.948809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 
00:29:54.821 [2024-11-26 19:20:11.949179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.949209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-11-26 19:20:11.949568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-11-26 19:20:11.949596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-11-26 19:20:11.949966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.949996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.950342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.950373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.950736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.950766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.951133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.951170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.951527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.951555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.951919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.951948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.952308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.952337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.952699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.952726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 
00:29:54.822 [2024-11-26 19:20:11.953088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.953116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.953478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.953507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.953873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.953901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.954338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.954368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.954722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.954750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.955135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.955174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.955540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.955568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.955933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.955961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.956326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.956355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.956717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.956745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 
00:29:54.822 [2024-11-26 19:20:11.957118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.957145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.957519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.957549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.957874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.957901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.958142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.958182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.958538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.958567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.958930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.958961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.959317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.959347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.959607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.959634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.959868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.959900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.960317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.960347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 
00:29:54.822 [2024-11-26 19:20:11.960704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.960732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.961090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.961118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.961466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.961497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.961871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.961899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.962263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.962293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.962693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.962721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.963056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.963085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.963441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.963471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.963826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.963857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.964214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.964243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 
00:29:54.822 [2024-11-26 19:20:11.964511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.964539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.964892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.964921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.965358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.965386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.965736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.965766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.966108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-11-26 19:20:11.966137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-11-26 19:20:11.966507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.966536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.966897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.966925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.967297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.967327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.967698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.967726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.968086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.968114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 
00:29:54.823 [2024-11-26 19:20:11.968495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.968525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.968888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.968917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.969289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.969318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.969699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.969727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.970068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.970095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.970522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.970552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.970909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.970937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.971304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.971333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.971591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.971623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.971865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.971893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 
00:29:54.823 [2024-11-26 19:20:11.972244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.972273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.972638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.972666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.972988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.973016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.973388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.973424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.973681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.973709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.974140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.974197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.974563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.974591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.974957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.974985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.975339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.975369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.975668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.975696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 
00:29:54.823 [2024-11-26 19:20:11.976046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.976075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.976401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.976430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.976787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.976816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.977186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.977215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.977577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.977606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.977984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.978011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.978382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.978411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.978747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.978776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.979139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.979180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.979534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.979562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 
00:29:54.823 [2024-11-26 19:20:11.979911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.979939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.980282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.980313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.980691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.980719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.981089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.981117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.981524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.981554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.981917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.981945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.982201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.982231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.982571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.982600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.982967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.982995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-11-26 19:20:11.983369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-11-26 19:20:11.983399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 
00:29:54.823 [2024-11-26 19:20:11.983751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.823 [2024-11-26 19:20:11.983790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420
00:29:54.823 qpair failed and we were unable to recover it.
00:29:54.823 [... the same three-line record (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt from 19:20:11.983751 through 19:20:12.063202; roughly 200 further identical attempts elided ...]
00:29:55.108 [2024-11-26 19:20:12.063173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.108 [2024-11-26 19:20:12.063202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420
00:29:55.108 qpair failed and we were unable to recover it.
00:29:55.108 [2024-11-26 19:20:12.063558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 19:20:12.063586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 19:20:12.063945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 19:20:12.063973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 19:20:12.064342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 19:20:12.064372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 19:20:12.064736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 19:20:12.064764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 19:20:12.065018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 19:20:12.065050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 19:20:12.065298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 19:20:12.065327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 19:20:12.065706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 19:20:12.065735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 19:20:12.066180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 19:20:12.066217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 19:20:12.066554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 19:20:12.066582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 19:20:12.066947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 19:20:12.066975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 
00:29:55.108 [2024-11-26 19:20:12.067335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 19:20:12.067364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 19:20:12.067725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 19:20:12.067753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 19:20:12.068123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 19:20:12.068151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 19:20:12.068470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 19:20:12.068498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 19:20:12.068862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 19:20:12.068890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 19:20:12.069237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 19:20:12.069267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 19:20:12.069598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 19:20:12.069626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 19:20:12.069972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 19:20:12.070000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 19:20:12.070391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 19:20:12.070420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 00:29:55.108 [2024-11-26 19:20:12.070662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.108 [2024-11-26 19:20:12.070690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.108 qpair failed and we were unable to recover it. 
00:29:55.108 [2024-11-26 19:20:12.071030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 19:20:12.071057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 19:20:12.071497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 19:20:12.071529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 19:20:12.071880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 19:20:12.071908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 19:20:12.072275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 19:20:12.072304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 19:20:12.072665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 19:20:12.072693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 19:20:12.072994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 19:20:12.073023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 19:20:12.073378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 19:20:12.073408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 19:20:12.073795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 19:20:12.073822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 19:20:12.074179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 19:20:12.074208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 19:20:12.074537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 19:20:12.074566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 
00:29:55.109 [2024-11-26 19:20:12.074945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 19:20:12.074972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 19:20:12.075336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 19:20:12.075364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 19:20:12.075772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 19:20:12.075800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 19:20:12.076195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 19:20:12.076225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 19:20:12.076622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 19:20:12.076651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 19:20:12.077059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 19:20:12.077087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 19:20:12.077421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 19:20:12.077453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 19:20:12.077816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 19:20:12.077845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 19:20:12.078226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 19:20:12.078256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 19:20:12.078602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 19:20:12.078634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 
00:29:55.109 [2024-11-26 19:20:12.078871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 19:20:12.078899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 19:20:12.079237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 19:20:12.079265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 19:20:12.079619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 19:20:12.079647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 19:20:12.079918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 19:20:12.079946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 19:20:12.080294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 19:20:12.080323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 19:20:12.080696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 19:20:12.080724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 19:20:12.081093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 19:20:12.081121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 19:20:12.081513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 19:20:12.081544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 19:20:12.081921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.109 [2024-11-26 19:20:12.081950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.109 qpair failed and we were unable to recover it. 00:29:55.109 [2024-11-26 19:20:12.082307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 19:20:12.082337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 
00:29:55.110 [2024-11-26 19:20:12.082690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 19:20:12.082718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 19:20:12.083064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 19:20:12.083095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 19:20:12.083464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 19:20:12.083494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 19:20:12.083834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 19:20:12.083862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 19:20:12.084228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 19:20:12.084258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 19:20:12.084636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 19:20:12.084665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 19:20:12.085018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 19:20:12.085046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 19:20:12.085391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 19:20:12.085422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 19:20:12.085782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 19:20:12.085810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 19:20:12.086054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 19:20:12.086082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 
00:29:55.110 [2024-11-26 19:20:12.086452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 19:20:12.086482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 19:20:12.086878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 19:20:12.086906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 19:20:12.087333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 19:20:12.087365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 19:20:12.087614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 19:20:12.087644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 19:20:12.087980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 19:20:12.088008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 19:20:12.088457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 19:20:12.088487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 19:20:12.088834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 19:20:12.088863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 19:20:12.089217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 19:20:12.089247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 19:20:12.089616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 19:20:12.089644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 19:20:12.089995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 19:20:12.090023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 
00:29:55.110 [2024-11-26 19:20:12.090268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 19:20:12.090298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 19:20:12.090653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 19:20:12.090682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 19:20:12.091049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 19:20:12.091077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 19:20:12.091436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 19:20:12.091464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 19:20:12.091834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 19:20:12.091864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 19:20:12.092224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 19:20:12.092260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 19:20:12.092642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 19:20:12.092671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 19:20:12.093007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 19:20:12.093036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-26 19:20:12.093392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-26 19:20:12.093421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.111 [2024-11-26 19:20:12.093786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.111 [2024-11-26 19:20:12.093815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.111 qpair failed and we were unable to recover it. 
00:29:55.111 [2024-11-26 19:20:12.094070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.111 [2024-11-26 19:20:12.094099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.111 qpair failed and we were unable to recover it. 00:29:55.111 [2024-11-26 19:20:12.094474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.111 [2024-11-26 19:20:12.094504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.111 qpair failed and we were unable to recover it. 00:29:55.111 [2024-11-26 19:20:12.094838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.111 [2024-11-26 19:20:12.094866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.111 qpair failed and we were unable to recover it. 00:29:55.111 [2024-11-26 19:20:12.095133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.111 [2024-11-26 19:20:12.095174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.111 qpair failed and we were unable to recover it. 00:29:55.111 [2024-11-26 19:20:12.095520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.111 [2024-11-26 19:20:12.095549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.111 qpair failed and we were unable to recover it. 00:29:55.111 [2024-11-26 19:20:12.095908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.111 [2024-11-26 19:20:12.095936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.111 qpair failed and we were unable to recover it. 00:29:55.111 [2024-11-26 19:20:12.096308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.111 [2024-11-26 19:20:12.096340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.111 qpair failed and we were unable to recover it. 00:29:55.111 [2024-11-26 19:20:12.096660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.111 [2024-11-26 19:20:12.096688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.111 qpair failed and we were unable to recover it. 00:29:55.111 [2024-11-26 19:20:12.097114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.111 [2024-11-26 19:20:12.097142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.111 qpair failed and we were unable to recover it. 00:29:55.111 [2024-11-26 19:20:12.097482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.111 [2024-11-26 19:20:12.097512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.111 qpair failed and we were unable to recover it. 
00:29:55.111 [2024-11-26 19:20:12.097889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.111 [2024-11-26 19:20:12.097919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.111 qpair failed and we were unable to recover it. 00:29:55.111 [2024-11-26 19:20:12.098275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.111 [2024-11-26 19:20:12.098306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.111 qpair failed and we were unable to recover it. 00:29:55.111 [2024-11-26 19:20:12.098621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.111 [2024-11-26 19:20:12.098649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.111 qpair failed and we were unable to recover it. 00:29:55.111 [2024-11-26 19:20:12.099032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.111 [2024-11-26 19:20:12.099063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.111 qpair failed and we were unable to recover it. 00:29:55.111 [2024-11-26 19:20:12.099306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.111 [2024-11-26 19:20:12.099336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.111 qpair failed and we were unable to recover it. 00:29:55.111 [2024-11-26 19:20:12.099691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.111 [2024-11-26 19:20:12.099720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.111 qpair failed and we were unable to recover it. 00:29:55.111 [2024-11-26 19:20:12.100143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.111 [2024-11-26 19:20:12.100188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.111 qpair failed and we were unable to recover it. 00:29:55.111 [2024-11-26 19:20:12.100518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.111 [2024-11-26 19:20:12.100546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.111 qpair failed and we were unable to recover it. 00:29:55.111 [2024-11-26 19:20:12.100809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.111 [2024-11-26 19:20:12.100839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.111 qpair failed and we were unable to recover it. 00:29:55.111 [2024-11-26 19:20:12.101188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.111 [2024-11-26 19:20:12.101220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.111 qpair failed and we were unable to recover it. 
00:29:55.111 [2024-11-26 19:20:12.101665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.111 [2024-11-26 19:20:12.101693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.111 qpair failed and we were unable to recover it. 00:29:55.111 [2024-11-26 19:20:12.102061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.111 [2024-11-26 19:20:12.102092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.111 qpair failed and we were unable to recover it. 00:29:55.111 [2024-11-26 19:20:12.102332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.111 [2024-11-26 19:20:12.102368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.111 qpair failed and we were unable to recover it. 00:29:55.111 [2024-11-26 19:20:12.102724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.111 [2024-11-26 19:20:12.102751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.111 qpair failed and we were unable to recover it. 00:29:55.111 [2024-11-26 19:20:12.103115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.111 [2024-11-26 19:20:12.103145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.111 qpair failed and we were unable to recover it. 00:29:55.111 [2024-11-26 19:20:12.103534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.111 [2024-11-26 19:20:12.103563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.111 qpair failed and we were unable to recover it. 00:29:55.111 [2024-11-26 19:20:12.103921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.111 [2024-11-26 19:20:12.103949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.111 qpair failed and we were unable to recover it. 00:29:55.111 [2024-11-26 19:20:12.104298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.111 [2024-11-26 19:20:12.104328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.111 qpair failed and we were unable to recover it. 00:29:55.111 [2024-11-26 19:20:12.104708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.111 [2024-11-26 19:20:12.104739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.111 qpair failed and we were unable to recover it. 00:29:55.112 [2024-11-26 19:20:12.105078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.112 [2024-11-26 19:20:12.105106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.112 qpair failed and we were unable to recover it. 
00:29:55.112 [2024-11-26 19:20:12.105509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.112 [2024-11-26 19:20:12.105540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.112 qpair failed and we were unable to recover it. 00:29:55.112 [2024-11-26 19:20:12.105782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.112 [2024-11-26 19:20:12.105810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.112 qpair failed and we were unable to recover it. 00:29:55.112 [2024-11-26 19:20:12.106157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.112 [2024-11-26 19:20:12.106201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.112 qpair failed and we were unable to recover it. 00:29:55.112 [2024-11-26 19:20:12.106442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.112 [2024-11-26 19:20:12.106470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.112 qpair failed and we were unable to recover it. 00:29:55.112 [2024-11-26 19:20:12.106850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.112 [2024-11-26 19:20:12.106878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.112 qpair failed and we were unable to recover it. 00:29:55.112 [2024-11-26 19:20:12.107233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.112 [2024-11-26 19:20:12.107266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.112 qpair failed and we were unable to recover it. 00:29:55.112 [2024-11-26 19:20:12.107644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.112 [2024-11-26 19:20:12.107675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.112 qpair failed and we were unable to recover it. 00:29:55.112 [2024-11-26 19:20:12.108037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.112 [2024-11-26 19:20:12.108067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.112 qpair failed and we were unable to recover it. 00:29:55.112 [2024-11-26 19:20:12.108323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.112 [2024-11-26 19:20:12.108357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.112 qpair failed and we were unable to recover it. 00:29:55.112 [2024-11-26 19:20:12.108751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.112 [2024-11-26 19:20:12.108782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.112 qpair failed and we were unable to recover it. 
00:29:55.112 [2024-11-26 19:20:12.109138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.112 [2024-11-26 19:20:12.109180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.112 qpair failed and we were unable to recover it. 00:29:55.112 [2024-11-26 19:20:12.109514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.112 [2024-11-26 19:20:12.109543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.112 qpair failed and we were unable to recover it. 00:29:55.112 [2024-11-26 19:20:12.109914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.112 [2024-11-26 19:20:12.109945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.112 qpair failed and we were unable to recover it. 00:29:55.112 [2024-11-26 19:20:12.110307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.112 [2024-11-26 19:20:12.110338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.112 qpair failed and we were unable to recover it. 00:29:55.112 [2024-11-26 19:20:12.110694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.112 [2024-11-26 19:20:12.110724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.112 qpair failed and we were unable to recover it. 00:29:55.112 [2024-11-26 19:20:12.111098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.112 [2024-11-26 19:20:12.111129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.112 qpair failed and we were unable to recover it. 00:29:55.112 [2024-11-26 19:20:12.111497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.112 [2024-11-26 19:20:12.111528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.112 qpair failed and we were unable to recover it. 00:29:55.112 [2024-11-26 19:20:12.111897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.112 [2024-11-26 19:20:12.111925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.112 qpair failed and we were unable to recover it. 00:29:55.112 [2024-11-26 19:20:12.112279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.112 [2024-11-26 19:20:12.112312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.112 qpair failed and we were unable to recover it. 00:29:55.112 [2024-11-26 19:20:12.112544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.112 [2024-11-26 19:20:12.112586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.112 qpair failed and we were unable to recover it. 
00:29:55.112 [2024-11-26 19:20:12.112980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.112 [2024-11-26 19:20:12.113010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.112 qpair failed and we were unable to recover it. 00:29:55.112 [2024-11-26 19:20:12.113257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.112 [2024-11-26 19:20:12.113287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.112 qpair failed and we were unable to recover it. 00:29:55.112 [2024-11-26 19:20:12.113671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.112 [2024-11-26 19:20:12.113701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.112 qpair failed and we were unable to recover it. 00:29:55.112 [2024-11-26 19:20:12.114066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.112 [2024-11-26 19:20:12.114097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.112 qpair failed and we were unable to recover it. 00:29:55.112 [2024-11-26 19:20:12.114457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.112 [2024-11-26 19:20:12.114486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.112 qpair failed and we were unable to recover it. 00:29:55.112 [2024-11-26 19:20:12.114845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.112 [2024-11-26 19:20:12.114876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.112 qpair failed and we were unable to recover it. 00:29:55.112 [2024-11-26 19:20:12.115234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.112 [2024-11-26 19:20:12.115264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.112 qpair failed and we were unable to recover it. 00:29:55.112 [2024-11-26 19:20:12.115638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.112 [2024-11-26 19:20:12.115668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.112 qpair failed and we were unable to recover it. 00:29:55.112 [2024-11-26 19:20:12.116021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.112 [2024-11-26 19:20:12.116051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.112 qpair failed and we were unable to recover it. 00:29:55.112 [2024-11-26 19:20:12.116410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.112 [2024-11-26 19:20:12.116439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.112 qpair failed and we were unable to recover it. 
[... the same three-line sequence ("connect() failed, errno = 111" / "sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it.") repeats without variation, driver timestamps advancing from 19:20:12.114 through 19:20:12.193, console timestamps 00:29:55.112 through 00:29:55.119 ...]
00:29:55.119 [2024-11-26 19:20:12.193324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.119 [2024-11-26 19:20:12.193353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.119 qpair failed and we were unable to recover it. 00:29:55.119 [2024-11-26 19:20:12.193731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.119 [2024-11-26 19:20:12.193759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.119 qpair failed and we were unable to recover it. 00:29:55.119 [2024-11-26 19:20:12.194120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.119 [2024-11-26 19:20:12.194148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.119 qpair failed and we were unable to recover it. 00:29:55.119 [2024-11-26 19:20:12.194407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.119 [2024-11-26 19:20:12.194435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.119 qpair failed and we were unable to recover it. 00:29:55.119 [2024-11-26 19:20:12.194714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.119 [2024-11-26 19:20:12.194743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.119 qpair failed and we were unable to recover it. 00:29:55.119 [2024-11-26 19:20:12.195113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.119 [2024-11-26 19:20:12.195142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.119 qpair failed and we were unable to recover it. 00:29:55.119 [2024-11-26 19:20:12.195491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.119 [2024-11-26 19:20:12.195529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 19:20:12.195893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 19:20:12.195921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 19:20:12.196199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 19:20:12.196230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 19:20:12.196601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 19:20:12.196631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 
00:29:55.120 [2024-11-26 19:20:12.196968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 19:20:12.196996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 19:20:12.197365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 19:20:12.197394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 19:20:12.197761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 19:20:12.197789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 19:20:12.198139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 19:20:12.198178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 19:20:12.198569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 19:20:12.198599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 19:20:12.199036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 19:20:12.199063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 19:20:12.199403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 19:20:12.199433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 19:20:12.199795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 19:20:12.199823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 19:20:12.200191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 19:20:12.200221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 19:20:12.200579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 19:20:12.200608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 
00:29:55.120 [2024-11-26 19:20:12.200969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 19:20:12.200999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 19:20:12.201391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 19:20:12.201420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 19:20:12.201770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 19:20:12.201800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 19:20:12.202171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 19:20:12.202201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 19:20:12.202549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 19:20:12.202578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 19:20:12.202926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 19:20:12.202954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 19:20:12.203316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 19:20:12.203347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 19:20:12.203729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 19:20:12.203757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 19:20:12.204190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 19:20:12.204220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 19:20:12.204573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 19:20:12.204601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 
00:29:55.120 [2024-11-26 19:20:12.204979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 19:20:12.205008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 19:20:12.205384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 19:20:12.205413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.120 [2024-11-26 19:20:12.205755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.120 [2024-11-26 19:20:12.205784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.120 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 19:20:12.206154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 19:20:12.206194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 19:20:12.206466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 19:20:12.206494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 19:20:12.206838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 19:20:12.206866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 19:20:12.207241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 19:20:12.207271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 19:20:12.207642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 19:20:12.207670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 19:20:12.208031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 19:20:12.208059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 19:20:12.208420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 19:20:12.208450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 
00:29:55.121 [2024-11-26 19:20:12.208808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 19:20:12.208836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 19:20:12.209197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 19:20:12.209228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 19:20:12.209623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 19:20:12.209651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 19:20:12.209902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 19:20:12.209930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 19:20:12.210277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 19:20:12.210306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 19:20:12.210546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 19:20:12.210577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 19:20:12.210940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 19:20:12.210969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 19:20:12.211333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 19:20:12.211364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 19:20:12.211728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 19:20:12.211756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 19:20:12.212116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 19:20:12.212145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 
00:29:55.121 [2024-11-26 19:20:12.212428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 19:20:12.212461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 19:20:12.212831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 19:20:12.212860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 19:20:12.213215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 19:20:12.213245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 19:20:12.213615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 19:20:12.213643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 19:20:12.214003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 19:20:12.214031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 19:20:12.214381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 19:20:12.214411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 19:20:12.214776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 19:20:12.214804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 19:20:12.215176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 19:20:12.215206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 19:20:12.215576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 19:20:12.215605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 19:20:12.215894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 19:20:12.215922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 
00:29:55.121 [2024-11-26 19:20:12.216284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 19:20:12.216314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 19:20:12.216670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 19:20:12.216698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 19:20:12.217060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 19:20:12.217088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 19:20:12.217446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 19:20:12.217476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 19:20:12.217859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 19:20:12.217886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.121 qpair failed and we were unable to recover it. 00:29:55.121 [2024-11-26 19:20:12.218226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.121 [2024-11-26 19:20:12.218256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.122 qpair failed and we were unable to recover it. 00:29:55.122 [2024-11-26 19:20:12.218551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.122 [2024-11-26 19:20:12.218578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.122 qpair failed and we were unable to recover it. 00:29:55.122 [2024-11-26 19:20:12.218940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.122 [2024-11-26 19:20:12.218968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.122 qpair failed and we were unable to recover it. 00:29:55.122 [2024-11-26 19:20:12.219311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.122 [2024-11-26 19:20:12.219342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.122 qpair failed and we were unable to recover it. 00:29:55.122 [2024-11-26 19:20:12.219709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.122 [2024-11-26 19:20:12.219737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.122 qpair failed and we were unable to recover it. 
00:29:55.122 [2024-11-26 19:20:12.220104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.122 [2024-11-26 19:20:12.220132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.122 qpair failed and we were unable to recover it. 00:29:55.122 [2024-11-26 19:20:12.220509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.122 [2024-11-26 19:20:12.220538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.122 qpair failed and we were unable to recover it. 00:29:55.122 [2024-11-26 19:20:12.220910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.122 [2024-11-26 19:20:12.220938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.122 qpair failed and we were unable to recover it. 00:29:55.122 [2024-11-26 19:20:12.221299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.122 [2024-11-26 19:20:12.221330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.122 qpair failed and we were unable to recover it. 00:29:55.122 [2024-11-26 19:20:12.221691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.122 [2024-11-26 19:20:12.221725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.122 qpair failed and we were unable to recover it. 00:29:55.122 [2024-11-26 19:20:12.222065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.122 [2024-11-26 19:20:12.222096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.122 qpair failed and we were unable to recover it. 00:29:55.122 [2024-11-26 19:20:12.222444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.122 [2024-11-26 19:20:12.222473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.122 qpair failed and we were unable to recover it. 00:29:55.122 [2024-11-26 19:20:12.222836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.122 [2024-11-26 19:20:12.222865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.122 qpair failed and we were unable to recover it. 00:29:55.122 [2024-11-26 19:20:12.223233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.122 [2024-11-26 19:20:12.223262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.122 qpair failed and we were unable to recover it. 00:29:55.122 [2024-11-26 19:20:12.223629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.122 [2024-11-26 19:20:12.223659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.122 qpair failed and we were unable to recover it. 
00:29:55.122 [2024-11-26 19:20:12.224028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.122 [2024-11-26 19:20:12.224056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.122 qpair failed and we were unable to recover it. 00:29:55.122 [2024-11-26 19:20:12.224430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.122 [2024-11-26 19:20:12.224459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.122 qpair failed and we were unable to recover it. 00:29:55.122 [2024-11-26 19:20:12.224824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.122 [2024-11-26 19:20:12.224852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.122 qpair failed and we were unable to recover it. 00:29:55.122 [2024-11-26 19:20:12.225221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.122 [2024-11-26 19:20:12.225268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.122 qpair failed and we were unable to recover it. 00:29:55.122 [2024-11-26 19:20:12.225627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.122 [2024-11-26 19:20:12.225655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.122 qpair failed and we were unable to recover it. 00:29:55.122 [2024-11-26 19:20:12.225991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.122 [2024-11-26 19:20:12.226019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.122 qpair failed and we were unable to recover it. 00:29:55.122 [2024-11-26 19:20:12.226295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.122 [2024-11-26 19:20:12.226324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.122 qpair failed and we were unable to recover it. 00:29:55.122 [2024-11-26 19:20:12.226691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.122 [2024-11-26 19:20:12.226720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.122 qpair failed and we were unable to recover it. 00:29:55.122 [2024-11-26 19:20:12.227080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.122 [2024-11-26 19:20:12.227109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.122 qpair failed and we were unable to recover it. 00:29:55.122 [2024-11-26 19:20:12.227449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.122 [2024-11-26 19:20:12.227479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.122 qpair failed and we were unable to recover it. 
00:29:55.122 [2024-11-26 19:20:12.227837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.122 [2024-11-26 19:20:12.227867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.122 qpair failed and we were unable to recover it. 00:29:55.122 [2024-11-26 19:20:12.228233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.122 [2024-11-26 19:20:12.228264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.122 qpair failed and we were unable to recover it. 00:29:55.122 [2024-11-26 19:20:12.228634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.122 [2024-11-26 19:20:12.228662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.122 qpair failed and we were unable to recover it. 00:29:55.122 [2024-11-26 19:20:12.229025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.122 [2024-11-26 19:20:12.229054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.122 qpair failed and we were unable to recover it. 00:29:55.122 [2024-11-26 19:20:12.229416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.122 [2024-11-26 19:20:12.229445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.122 qpair failed and we were unable to recover it. 00:29:55.122 [2024-11-26 19:20:12.229802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.122 [2024-11-26 19:20:12.229830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.122 qpair failed and we were unable to recover it. 00:29:55.122 [2024-11-26 19:20:12.230195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.122 [2024-11-26 19:20:12.230224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.122 qpair failed and we were unable to recover it. 00:29:55.122 [2024-11-26 19:20:12.230590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.122 [2024-11-26 19:20:12.230618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.122 qpair failed and we were unable to recover it. 00:29:55.123 [2024-11-26 19:20:12.230961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.123 [2024-11-26 19:20:12.230991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.123 qpair failed and we were unable to recover it. 00:29:55.123 [2024-11-26 19:20:12.231339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.123 [2024-11-26 19:20:12.231369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.123 qpair failed and we were unable to recover it. 
00:29:55.123 [2024-11-26 19:20:12.231725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.123 [2024-11-26 19:20:12.231754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.123 qpair failed and we were unable to recover it. 00:29:55.123 [2024-11-26 19:20:12.232112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.123 [2024-11-26 19:20:12.232147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.123 qpair failed and we were unable to recover it. 00:29:55.123 [2024-11-26 19:20:12.232465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.123 [2024-11-26 19:20:12.232494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.123 qpair failed and we were unable to recover it. 00:29:55.123 [2024-11-26 19:20:12.232834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.123 [2024-11-26 19:20:12.232861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.123 qpair failed and we were unable to recover it. 00:29:55.123 [2024-11-26 19:20:12.233267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.123 [2024-11-26 19:20:12.233297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.123 qpair failed and we were unable to recover it. 00:29:55.123 [2024-11-26 19:20:12.233665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.123 [2024-11-26 19:20:12.233694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.123 qpair failed and we were unable to recover it. 00:29:55.123 [2024-11-26 19:20:12.234064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.123 [2024-11-26 19:20:12.234091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.123 qpair failed and we were unable to recover it. 00:29:55.123 [2024-11-26 19:20:12.234457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.123 [2024-11-26 19:20:12.234486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.123 qpair failed and we were unable to recover it. 00:29:55.123 [2024-11-26 19:20:12.234841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.123 [2024-11-26 19:20:12.234869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.123 qpair failed and we were unable to recover it. 00:29:55.123 [2024-11-26 19:20:12.235272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.123 [2024-11-26 19:20:12.235301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.123 qpair failed and we were unable to recover it. 
00:29:55.123 [2024-11-26 19:20:12.235656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.123 [2024-11-26 19:20:12.235684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.123 qpair failed and we were unable to recover it. 00:29:55.123 [2024-11-26 19:20:12.236082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.123 [2024-11-26 19:20:12.236110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.123 qpair failed and we were unable to recover it. 00:29:55.123 [2024-11-26 19:20:12.236474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.123 [2024-11-26 19:20:12.236503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.123 qpair failed and we were unable to recover it. 00:29:55.123 [2024-11-26 19:20:12.236764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.123 [2024-11-26 19:20:12.236792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.123 qpair failed and we were unable to recover it. 00:29:55.123 [2024-11-26 19:20:12.237147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.123 [2024-11-26 19:20:12.237188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.123 qpair failed and we were unable to recover it. 00:29:55.123 [2024-11-26 19:20:12.237554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.123 [2024-11-26 19:20:12.237584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.123 qpair failed and we were unable to recover it. 00:29:55.123 [2024-11-26 19:20:12.237988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.123 [2024-11-26 19:20:12.238016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.123 qpair failed and we were unable to recover it. 00:29:55.123 [2024-11-26 19:20:12.238317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.123 [2024-11-26 19:20:12.238347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.123 qpair failed and we were unable to recover it. 00:29:55.123 [2024-11-26 19:20:12.238684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.123 [2024-11-26 19:20:12.238712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.123 qpair failed and we were unable to recover it. 00:29:55.123 [2024-11-26 19:20:12.239076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.123 [2024-11-26 19:20:12.239104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.123 qpair failed and we were unable to recover it. 
00:29:55.123 [2024-11-26 19:20:12.239464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.123 [2024-11-26 19:20:12.239493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.123 qpair failed and we were unable to recover it. 00:29:55.123 [2024-11-26 19:20:12.239850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.123 [2024-11-26 19:20:12.239880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.123 qpair failed and we were unable to recover it. 00:29:55.123 [2024-11-26 19:20:12.240222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.123 [2024-11-26 19:20:12.240252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.123 qpair failed and we were unable to recover it. 00:29:55.123 [2024-11-26 19:20:12.240618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.123 [2024-11-26 19:20:12.240648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.123 qpair failed and we were unable to recover it. 00:29:55.123 [2024-11-26 19:20:12.240980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.123 [2024-11-26 19:20:12.241008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.123 qpair failed and we were unable to recover it. 00:29:55.123 [2024-11-26 19:20:12.241382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.123 [2024-11-26 19:20:12.241413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.123 qpair failed and we were unable to recover it. 00:29:55.123 [2024-11-26 19:20:12.241683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.123 [2024-11-26 19:20:12.241711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.123 qpair failed and we were unable to recover it. 00:29:55.123 [2024-11-26 19:20:12.242052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.123 [2024-11-26 19:20:12.242081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.123 qpair failed and we were unable to recover it. 00:29:55.124 [2024-11-26 19:20:12.242506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.124 [2024-11-26 19:20:12.242536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.124 qpair failed and we were unable to recover it. 00:29:55.124 [2024-11-26 19:20:12.242893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.124 [2024-11-26 19:20:12.242922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.124 qpair failed and we were unable to recover it. 
00:29:55.124 [2024-11-26 19:20:12.243294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.124 [2024-11-26 19:20:12.243323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420
00:29:55.124 qpair failed and we were unable to recover it.
[the identical three-line failure repeats continuously from 19:20:12.243 through 19:20:12.324, always with errno = 111 for tqpair=0x13780c0, addr=10.0.0.2, port=4420; the verbatim duplicates are collapsed here]
00:29:55.407 [2024-11-26 19:20:12.324706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.407 [2024-11-26 19:20:12.324734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.407 qpair failed and we were unable to recover it. 00:29:55.407 [2024-11-26 19:20:12.325092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.407 [2024-11-26 19:20:12.325120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.407 qpair failed and we were unable to recover it. 00:29:55.407 [2024-11-26 19:20:12.325489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.407 [2024-11-26 19:20:12.325518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.407 qpair failed and we were unable to recover it. 00:29:55.407 [2024-11-26 19:20:12.325867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.407 [2024-11-26 19:20:12.325895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.407 qpair failed and we were unable to recover it. 00:29:55.407 [2024-11-26 19:20:12.326263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.407 [2024-11-26 19:20:12.326294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.407 qpair failed and we were unable to recover it. 00:29:55.407 [2024-11-26 19:20:12.326661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.407 [2024-11-26 19:20:12.326689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.407 qpair failed and we were unable to recover it. 00:29:55.407 [2024-11-26 19:20:12.326947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.407 [2024-11-26 19:20:12.326974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.407 qpair failed and we were unable to recover it. 00:29:55.407 [2024-11-26 19:20:12.327327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.407 [2024-11-26 19:20:12.327356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.407 qpair failed and we were unable to recover it. 00:29:55.407 [2024-11-26 19:20:12.327732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.407 [2024-11-26 19:20:12.327760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.407 qpair failed and we were unable to recover it. 00:29:55.407 [2024-11-26 19:20:12.328118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.407 [2024-11-26 19:20:12.328145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.407 qpair failed and we were unable to recover it. 
00:29:55.407 [2024-11-26 19:20:12.328454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.407 [2024-11-26 19:20:12.328482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.407 qpair failed and we were unable to recover it. 00:29:55.407 [2024-11-26 19:20:12.328841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.407 [2024-11-26 19:20:12.328869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.407 qpair failed and we were unable to recover it. 00:29:55.407 [2024-11-26 19:20:12.329231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.407 [2024-11-26 19:20:12.329276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.407 qpair failed and we were unable to recover it. 00:29:55.407 [2024-11-26 19:20:12.329596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.407 [2024-11-26 19:20:12.329624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.407 qpair failed and we were unable to recover it. 00:29:55.407 [2024-11-26 19:20:12.330073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.407 [2024-11-26 19:20:12.330101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.407 qpair failed and we were unable to recover it. 00:29:55.407 [2024-11-26 19:20:12.330505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.407 [2024-11-26 19:20:12.330535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.407 qpair failed and we were unable to recover it. 00:29:55.407 [2024-11-26 19:20:12.330893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.407 [2024-11-26 19:20:12.330921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.407 qpair failed and we were unable to recover it. 00:29:55.407 [2024-11-26 19:20:12.331276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.407 [2024-11-26 19:20:12.331305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.407 qpair failed and we were unable to recover it. 00:29:55.407 [2024-11-26 19:20:12.331686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.407 [2024-11-26 19:20:12.331714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.407 qpair failed and we were unable to recover it. 00:29:55.407 [2024-11-26 19:20:12.332082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.407 [2024-11-26 19:20:12.332111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.407 qpair failed and we were unable to recover it. 
00:29:55.407 [2024-11-26 19:20:12.332479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.407 [2024-11-26 19:20:12.332511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.407 qpair failed and we were unable to recover it. 00:29:55.407 [2024-11-26 19:20:12.332857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.407 [2024-11-26 19:20:12.332885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.407 qpair failed and we were unable to recover it. 00:29:55.407 [2024-11-26 19:20:12.333318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.407 [2024-11-26 19:20:12.333347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.407 qpair failed and we were unable to recover it. 00:29:55.407 [2024-11-26 19:20:12.333673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.407 [2024-11-26 19:20:12.333701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.407 qpair failed and we were unable to recover it. 00:29:55.407 [2024-11-26 19:20:12.334064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.407 [2024-11-26 19:20:12.334092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.407 qpair failed and we were unable to recover it. 00:29:55.407 [2024-11-26 19:20:12.334457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.407 [2024-11-26 19:20:12.334486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.407 qpair failed and we were unable to recover it. 00:29:55.407 [2024-11-26 19:20:12.334846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.407 [2024-11-26 19:20:12.334874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.407 qpair failed and we were unable to recover it. 00:29:55.407 [2024-11-26 19:20:12.335117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.407 [2024-11-26 19:20:12.335150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.407 qpair failed and we were unable to recover it. 00:29:55.407 [2024-11-26 19:20:12.335562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.407 [2024-11-26 19:20:12.335591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.408 qpair failed and we were unable to recover it. 00:29:55.408 [2024-11-26 19:20:12.335950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.408 [2024-11-26 19:20:12.335979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.408 qpair failed and we were unable to recover it. 
00:29:55.408 [2024-11-26 19:20:12.336348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.408 [2024-11-26 19:20:12.336378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.408 qpair failed and we were unable to recover it. 00:29:55.408 [2024-11-26 19:20:12.336745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.408 [2024-11-26 19:20:12.336773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.408 qpair failed and we were unable to recover it. 00:29:55.408 [2024-11-26 19:20:12.337136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.408 [2024-11-26 19:20:12.337173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.408 qpair failed and we were unable to recover it. 00:29:55.408 [2024-11-26 19:20:12.337532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.408 [2024-11-26 19:20:12.337561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.408 qpair failed and we were unable to recover it. 00:29:55.408 [2024-11-26 19:20:12.337900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.408 [2024-11-26 19:20:12.337929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.408 qpair failed and we were unable to recover it. 00:29:55.408 [2024-11-26 19:20:12.338297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.408 [2024-11-26 19:20:12.338327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.408 qpair failed and we were unable to recover it. 00:29:55.408 [2024-11-26 19:20:12.338698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.408 [2024-11-26 19:20:12.338727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.408 qpair failed and we were unable to recover it. 00:29:55.408 [2024-11-26 19:20:12.339086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.408 [2024-11-26 19:20:12.339115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.408 qpair failed and we were unable to recover it. 00:29:55.408 [2024-11-26 19:20:12.339523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.408 [2024-11-26 19:20:12.339552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.408 qpair failed and we were unable to recover it. 00:29:55.408 [2024-11-26 19:20:12.339912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.408 [2024-11-26 19:20:12.339942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.408 qpair failed and we were unable to recover it. 
00:29:55.408 [2024-11-26 19:20:12.340307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.408 [2024-11-26 19:20:12.340338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.408 qpair failed and we were unable to recover it. 00:29:55.408 [2024-11-26 19:20:12.340660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.408 [2024-11-26 19:20:12.340689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.408 qpair failed and we were unable to recover it. 00:29:55.408 [2024-11-26 19:20:12.341048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.408 [2024-11-26 19:20:12.341076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.408 qpair failed and we were unable to recover it. 00:29:55.408 [2024-11-26 19:20:12.341418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.408 [2024-11-26 19:20:12.341449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.408 qpair failed and we were unable to recover it. 00:29:55.408 [2024-11-26 19:20:12.341807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.408 [2024-11-26 19:20:12.341836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.408 qpair failed and we were unable to recover it. 00:29:55.408 [2024-11-26 19:20:12.342204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.408 [2024-11-26 19:20:12.342235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.408 qpair failed and we were unable to recover it. 00:29:55.408 [2024-11-26 19:20:12.342470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.408 [2024-11-26 19:20:12.342508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.408 qpair failed and we were unable to recover it. 00:29:55.408 [2024-11-26 19:20:12.342888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.408 [2024-11-26 19:20:12.342917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.408 qpair failed and we were unable to recover it. 00:29:55.408 [2024-11-26 19:20:12.343274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.408 [2024-11-26 19:20:12.343305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.408 qpair failed and we were unable to recover it. 00:29:55.408 [2024-11-26 19:20:12.343666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.408 [2024-11-26 19:20:12.343695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.408 qpair failed and we were unable to recover it. 
00:29:55.408 [2024-11-26 19:20:12.344001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.408 [2024-11-26 19:20:12.344029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.408 qpair failed and we were unable to recover it. 00:29:55.408 [2024-11-26 19:20:12.344380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.408 [2024-11-26 19:20:12.344412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.408 qpair failed and we were unable to recover it. 00:29:55.408 [2024-11-26 19:20:12.344821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.408 [2024-11-26 19:20:12.344852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.408 qpair failed and we were unable to recover it. 00:29:55.408 [2024-11-26 19:20:12.345192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.408 [2024-11-26 19:20:12.345224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.408 qpair failed and we were unable to recover it. 00:29:55.408 [2024-11-26 19:20:12.345543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.408 [2024-11-26 19:20:12.345572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.408 qpair failed and we were unable to recover it. 00:29:55.408 [2024-11-26 19:20:12.345933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.408 [2024-11-26 19:20:12.345963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.408 qpair failed and we were unable to recover it. 00:29:55.408 [2024-11-26 19:20:12.346324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.408 [2024-11-26 19:20:12.346354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.408 qpair failed and we were unable to recover it. 00:29:55.408 [2024-11-26 19:20:12.346709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.408 [2024-11-26 19:20:12.346738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.408 qpair failed and we were unable to recover it. 00:29:55.408 [2024-11-26 19:20:12.347104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.408 [2024-11-26 19:20:12.347134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.408 qpair failed and we were unable to recover it. 00:29:55.408 [2024-11-26 19:20:12.347521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.409 [2024-11-26 19:20:12.347550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.409 qpair failed and we were unable to recover it. 
00:29:55.409 [2024-11-26 19:20:12.347960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.409 [2024-11-26 19:20:12.347991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.409 qpair failed and we were unable to recover it. 00:29:55.409 [2024-11-26 19:20:12.348396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.409 [2024-11-26 19:20:12.348426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.409 qpair failed and we were unable to recover it. 00:29:55.409 [2024-11-26 19:20:12.348765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.409 [2024-11-26 19:20:12.348795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.409 qpair failed and we were unable to recover it. 00:29:55.409 [2024-11-26 19:20:12.349177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.409 [2024-11-26 19:20:12.349208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.409 qpair failed and we were unable to recover it. 00:29:55.409 [2024-11-26 19:20:12.349555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.409 [2024-11-26 19:20:12.349583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.409 qpair failed and we were unable to recover it. 00:29:55.409 [2024-11-26 19:20:12.349948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.409 [2024-11-26 19:20:12.349978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.409 qpair failed and we were unable to recover it. 00:29:55.409 [2024-11-26 19:20:12.350343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.409 [2024-11-26 19:20:12.350373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.409 qpair failed and we were unable to recover it. 00:29:55.409 [2024-11-26 19:20:12.350742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.409 [2024-11-26 19:20:12.350770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.409 qpair failed and we were unable to recover it. 00:29:55.409 [2024-11-26 19:20:12.351135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.409 [2024-11-26 19:20:12.351176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.409 qpair failed and we were unable to recover it. 00:29:55.409 [2024-11-26 19:20:12.351576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.409 [2024-11-26 19:20:12.351606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.409 qpair failed and we were unable to recover it. 
00:29:55.409 [2024-11-26 19:20:12.351977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.409 [2024-11-26 19:20:12.352006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.409 qpair failed and we were unable to recover it. 00:29:55.409 [2024-11-26 19:20:12.352445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.409 [2024-11-26 19:20:12.352475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.409 qpair failed and we were unable to recover it. 00:29:55.409 [2024-11-26 19:20:12.352828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.409 [2024-11-26 19:20:12.352857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.409 qpair failed and we were unable to recover it. 00:29:55.409 [2024-11-26 19:20:12.353221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.409 [2024-11-26 19:20:12.353259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.409 qpair failed and we were unable to recover it. 00:29:55.409 [2024-11-26 19:20:12.353616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.409 [2024-11-26 19:20:12.353645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.409 qpair failed and we were unable to recover it. 00:29:55.409 [2024-11-26 19:20:12.354005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.409 [2024-11-26 19:20:12.354034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.409 qpair failed and we were unable to recover it. 00:29:55.409 [2024-11-26 19:20:12.354391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.409 [2024-11-26 19:20:12.354423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.409 qpair failed and we were unable to recover it. 00:29:55.409 [2024-11-26 19:20:12.354662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.409 [2024-11-26 19:20:12.354696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.409 qpair failed and we were unable to recover it. 00:29:55.409 [2024-11-26 19:20:12.355068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.409 [2024-11-26 19:20:12.355097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.409 qpair failed and we were unable to recover it. 00:29:55.409 [2024-11-26 19:20:12.355488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.409 [2024-11-26 19:20:12.355520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.409 qpair failed and we were unable to recover it. 
00:29:55.409 [2024-11-26 19:20:12.355919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.409 [2024-11-26 19:20:12.355947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.409 qpair failed and we were unable to recover it. 00:29:55.409 [2024-11-26 19:20:12.356243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.409 [2024-11-26 19:20:12.356274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.409 qpair failed and we were unable to recover it. 00:29:55.409 [2024-11-26 19:20:12.356610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.409 [2024-11-26 19:20:12.356641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.409 qpair failed and we were unable to recover it. 00:29:55.409 [2024-11-26 19:20:12.356997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.409 [2024-11-26 19:20:12.357027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.409 qpair failed and we were unable to recover it. 00:29:55.409 [2024-11-26 19:20:12.357379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.409 [2024-11-26 19:20:12.357409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.409 qpair failed and we were unable to recover it. 00:29:55.409 [2024-11-26 19:20:12.357769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.409 [2024-11-26 19:20:12.357798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.409 qpair failed and we were unable to recover it. 00:29:55.409 [2024-11-26 19:20:12.358103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.409 [2024-11-26 19:20:12.358131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.409 qpair failed and we were unable to recover it. 00:29:55.409 [2024-11-26 19:20:12.358503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.409 [2024-11-26 19:20:12.358533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.409 qpair failed and we were unable to recover it. 00:29:55.409 [2024-11-26 19:20:12.358891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-11-26 19:20:12.358921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-11-26 19:20:12.359292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-11-26 19:20:12.359323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 
00:29:55.410 [2024-11-26 19:20:12.359690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-11-26 19:20:12.359719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-11-26 19:20:12.360091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-11-26 19:20:12.360121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-11-26 19:20:12.360366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-11-26 19:20:12.360401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-11-26 19:20:12.360682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-11-26 19:20:12.360712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-11-26 19:20:12.361057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-11-26 19:20:12.361087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-11-26 19:20:12.361446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-11-26 19:20:12.361476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-11-26 19:20:12.361879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-11-26 19:20:12.361909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-11-26 19:20:12.362276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-11-26 19:20:12.362306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-11-26 19:20:12.362678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-11-26 19:20:12.362708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-11-26 19:20:12.363053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-11-26 19:20:12.363083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 
00:29:55.410 [2024-11-26 19:20:12.363360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-11-26 19:20:12.363396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-11-26 19:20:12.363772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-11-26 19:20:12.363800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-11-26 19:20:12.364174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-11-26 19:20:12.364204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-11-26 19:20:12.364454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-11-26 19:20:12.364489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-11-26 19:20:12.364923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-11-26 19:20:12.364953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-11-26 19:20:12.365301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-11-26 19:20:12.365331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-11-26 19:20:12.365688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-11-26 19:20:12.365718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-11-26 19:20:12.366079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-11-26 19:20:12.366109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-11-26 19:20:12.366547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-11-26 19:20:12.366577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-11-26 19:20:12.366932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-11-26 19:20:12.366960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 
00:29:55.410 [2024-11-26 19:20:12.367304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-11-26 19:20:12.367334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.410 [2024-11-26 19:20:12.367701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.410 [2024-11-26 19:20:12.367730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.410 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 19:20:12.368093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 19:20:12.368121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 19:20:12.368508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 19:20:12.368539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 19:20:12.368783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 19:20:12.368812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 19:20:12.369236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 19:20:12.369268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 19:20:12.369611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 19:20:12.369642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 19:20:12.370042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 19:20:12.370070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 19:20:12.370402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 19:20:12.370433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 19:20:12.370783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 19:20:12.370812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 
00:29:55.411 [2024-11-26 19:20:12.371183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 19:20:12.371213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 19:20:12.371583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 19:20:12.371612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 19:20:12.371862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 19:20:12.371894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 19:20:12.372269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 19:20:12.372301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 19:20:12.372651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 19:20:12.372681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 19:20:12.373038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 19:20:12.373067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 19:20:12.373410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 19:20:12.373441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 19:20:12.373818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 19:20:12.373848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 19:20:12.374217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 19:20:12.374248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 00:29:55.411 [2024-11-26 19:20:12.374586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.411 [2024-11-26 19:20:12.374616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.411 qpair failed and we were unable to recover it. 
00:29:55.411 [2024-11-26 19:20:12.374977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.411 [2024-11-26 19:20:12.375006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420
00:29:55.411 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats roughly 200 more times between 19:20:12.375 and 19:20:12.455 ...]
00:29:55.418 [2024-11-26 19:20:12.455364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.418 [2024-11-26 19:20:12.455393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420
00:29:55.418 qpair failed and we were unable to recover it.
00:29:55.418 [2024-11-26 19:20:12.455760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.418 [2024-11-26 19:20:12.455789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.418 qpair failed and we were unable to recover it. 00:29:55.418 [2024-11-26 19:20:12.456147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.418 [2024-11-26 19:20:12.456189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.418 qpair failed and we were unable to recover it. 00:29:55.418 [2024-11-26 19:20:12.456542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.418 [2024-11-26 19:20:12.456570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.418 qpair failed and we were unable to recover it. 00:29:55.418 [2024-11-26 19:20:12.456936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.418 [2024-11-26 19:20:12.456964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.418 qpair failed and we were unable to recover it. 00:29:55.418 [2024-11-26 19:20:12.457337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.418 [2024-11-26 19:20:12.457367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.418 qpair failed and we were unable to recover it. 00:29:55.418 [2024-11-26 19:20:12.457728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.418 [2024-11-26 19:20:12.457757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.418 qpair failed and we were unable to recover it. 00:29:55.418 [2024-11-26 19:20:12.458127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.418 [2024-11-26 19:20:12.458155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.418 qpair failed and we were unable to recover it. 00:29:55.418 [2024-11-26 19:20:12.458527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.418 [2024-11-26 19:20:12.458556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.418 qpair failed and we were unable to recover it. 00:29:55.418 [2024-11-26 19:20:12.458895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.418 [2024-11-26 19:20:12.458923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.418 qpair failed and we were unable to recover it. 00:29:55.418 [2024-11-26 19:20:12.459281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.418 [2024-11-26 19:20:12.459311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.418 qpair failed and we were unable to recover it. 
00:29:55.418 [2024-11-26 19:20:12.459695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.418 [2024-11-26 19:20:12.459725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.418 qpair failed and we were unable to recover it. 00:29:55.418 [2024-11-26 19:20:12.460073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.418 [2024-11-26 19:20:12.460101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.418 qpair failed and we were unable to recover it. 00:29:55.418 [2024-11-26 19:20:12.460443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.418 [2024-11-26 19:20:12.460473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.418 qpair failed and we were unable to recover it. 00:29:55.418 [2024-11-26 19:20:12.460822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.418 [2024-11-26 19:20:12.460850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.418 qpair failed and we were unable to recover it. 00:29:55.419 [2024-11-26 19:20:12.461223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.419 [2024-11-26 19:20:12.461253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.419 qpair failed and we were unable to recover it. 00:29:55.419 [2024-11-26 19:20:12.461587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.419 [2024-11-26 19:20:12.461614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.419 qpair failed and we were unable to recover it. 00:29:55.419 [2024-11-26 19:20:12.461956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.419 [2024-11-26 19:20:12.461984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.419 qpair failed and we were unable to recover it. 00:29:55.419 [2024-11-26 19:20:12.462226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.419 [2024-11-26 19:20:12.462255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.419 qpair failed and we were unable to recover it. 00:29:55.419 [2024-11-26 19:20:12.462633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.419 [2024-11-26 19:20:12.462661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.419 qpair failed and we were unable to recover it. 00:29:55.419 [2024-11-26 19:20:12.463022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.419 [2024-11-26 19:20:12.463051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.419 qpair failed and we were unable to recover it. 
00:29:55.419 [2024-11-26 19:20:12.463428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.419 [2024-11-26 19:20:12.463457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.419 qpair failed and we were unable to recover it. 00:29:55.419 [2024-11-26 19:20:12.463710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.419 [2024-11-26 19:20:12.463737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.419 qpair failed and we were unable to recover it. 00:29:55.419 [2024-11-26 19:20:12.464087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.419 [2024-11-26 19:20:12.464115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.419 qpair failed and we were unable to recover it. 00:29:55.419 [2024-11-26 19:20:12.464491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.419 [2024-11-26 19:20:12.464520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.419 qpair failed and we were unable to recover it. 00:29:55.419 [2024-11-26 19:20:12.464881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.419 [2024-11-26 19:20:12.464910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.419 qpair failed and we were unable to recover it. 00:29:55.419 [2024-11-26 19:20:12.465262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.419 [2024-11-26 19:20:12.465292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.419 qpair failed and we were unable to recover it. 00:29:55.419 [2024-11-26 19:20:12.465617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.419 [2024-11-26 19:20:12.465645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.419 qpair failed and we were unable to recover it. 00:29:55.419 [2024-11-26 19:20:12.466016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.419 [2024-11-26 19:20:12.466043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.419 qpair failed and we were unable to recover it. 00:29:55.419 [2024-11-26 19:20:12.466380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.419 [2024-11-26 19:20:12.466409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.419 qpair failed and we were unable to recover it. 00:29:55.419 [2024-11-26 19:20:12.466773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.419 [2024-11-26 19:20:12.466801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.419 qpair failed and we were unable to recover it. 
00:29:55.419 [2024-11-26 19:20:12.467176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.419 [2024-11-26 19:20:12.467205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.419 qpair failed and we were unable to recover it. 00:29:55.419 [2024-11-26 19:20:12.467564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.419 [2024-11-26 19:20:12.467593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.419 qpair failed and we were unable to recover it. 00:29:55.419 [2024-11-26 19:20:12.467963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.419 [2024-11-26 19:20:12.467992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.419 qpair failed and we were unable to recover it. 00:29:55.419 [2024-11-26 19:20:12.468339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.419 [2024-11-26 19:20:12.468369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.419 qpair failed and we were unable to recover it. 00:29:55.419 [2024-11-26 19:20:12.468571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.419 [2024-11-26 19:20:12.468602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.419 qpair failed and we were unable to recover it. 00:29:55.419 [2024-11-26 19:20:12.468962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.419 [2024-11-26 19:20:12.468990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.419 qpair failed and we were unable to recover it. 00:29:55.419 [2024-11-26 19:20:12.469353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.419 [2024-11-26 19:20:12.469383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.419 qpair failed and we were unable to recover it. 00:29:55.419 [2024-11-26 19:20:12.469752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.419 [2024-11-26 19:20:12.469780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.419 qpair failed and we were unable to recover it. 00:29:55.419 [2024-11-26 19:20:12.470030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.419 [2024-11-26 19:20:12.470058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.419 qpair failed and we were unable to recover it. 00:29:55.419 [2024-11-26 19:20:12.470394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.419 [2024-11-26 19:20:12.470423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.419 qpair failed and we were unable to recover it. 
00:29:55.419 [2024-11-26 19:20:12.470789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.419 [2024-11-26 19:20:12.470817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.419 qpair failed and we were unable to recover it. 00:29:55.419 [2024-11-26 19:20:12.471167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.419 [2024-11-26 19:20:12.471197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.419 qpair failed and we were unable to recover it. 00:29:55.419 [2024-11-26 19:20:12.471556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.419 [2024-11-26 19:20:12.471584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.419 qpair failed and we were unable to recover it. 00:29:55.419 [2024-11-26 19:20:12.471952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.419 [2024-11-26 19:20:12.471980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.419 qpair failed and we were unable to recover it. 00:29:55.419 [2024-11-26 19:20:12.472343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 19:20:12.472372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 19:20:12.472638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 19:20:12.472673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 19:20:12.473020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 19:20:12.473049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 19:20:12.473392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 19:20:12.473422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 19:20:12.473792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 19:20:12.473821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 19:20:12.474184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 19:20:12.474213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 
00:29:55.420 [2024-11-26 19:20:12.474565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 19:20:12.474593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 19:20:12.474966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 19:20:12.474994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 19:20:12.475332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 19:20:12.475361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 19:20:12.475722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 19:20:12.475750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 19:20:12.476112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 19:20:12.476141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 19:20:12.476490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 19:20:12.476519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 19:20:12.476893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 19:20:12.476921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 19:20:12.477286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 19:20:12.477316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 19:20:12.477659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 19:20:12.477687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 19:20:12.478053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 19:20:12.478081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 
00:29:55.420 [2024-11-26 19:20:12.478488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 19:20:12.478517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 19:20:12.478875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 19:20:12.478903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 19:20:12.479272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 19:20:12.479302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 19:20:12.479558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 19:20:12.479590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 19:20:12.479949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 19:20:12.479977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 19:20:12.480386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 19:20:12.480416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 19:20:12.480776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 19:20:12.480804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 19:20:12.481177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 19:20:12.481206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 19:20:12.481543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 19:20:12.481571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 19:20:12.481935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 19:20:12.481964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 
00:29:55.420 [2024-11-26 19:20:12.482275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 19:20:12.482304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 19:20:12.482677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 19:20:12.482705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 19:20:12.483122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 19:20:12.483156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 19:20:12.483503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 19:20:12.483531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 19:20:12.483896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 19:20:12.483924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-11-26 19:20:12.484274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-11-26 19:20:12.484303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.421 [2024-11-26 19:20:12.484543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-11-26 19:20:12.484575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-11-26 19:20:12.485008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-11-26 19:20:12.485037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-11-26 19:20:12.485407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-11-26 19:20:12.485436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-11-26 19:20:12.485807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-11-26 19:20:12.485835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 
00:29:55.421 [2024-11-26 19:20:12.486199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-11-26 19:20:12.486229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-11-26 19:20:12.486562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-11-26 19:20:12.486590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-11-26 19:20:12.486963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-11-26 19:20:12.486991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-11-26 19:20:12.487387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-11-26 19:20:12.487416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-11-26 19:20:12.487793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-11-26 19:20:12.487821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-11-26 19:20:12.488177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-11-26 19:20:12.488207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-11-26 19:20:12.488575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-11-26 19:20:12.488603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-11-26 19:20:12.488841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-11-26 19:20:12.488872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-11-26 19:20:12.489232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-11-26 19:20:12.489262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-11-26 19:20:12.489631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-11-26 19:20:12.489659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 
00:29:55.421 [2024-11-26 19:20:12.490027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-11-26 19:20:12.490055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-11-26 19:20:12.490403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-11-26 19:20:12.490433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-11-26 19:20:12.490865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-11-26 19:20:12.490894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-11-26 19:20:12.491234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-11-26 19:20:12.491264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-11-26 19:20:12.491627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-11-26 19:20:12.491655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-11-26 19:20:12.492017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-11-26 19:20:12.492045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-11-26 19:20:12.492381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-11-26 19:20:12.492411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-11-26 19:20:12.492769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-11-26 19:20:12.492798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-11-26 19:20:12.493186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-11-26 19:20:12.493216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-11-26 19:20:12.493601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-11-26 19:20:12.493629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 
00:29:55.421 [2024-11-26 19:20:12.493993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-11-26 19:20:12.494021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-11-26 19:20:12.494381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-11-26 19:20:12.494411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-11-26 19:20:12.494767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-11-26 19:20:12.494796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-11-26 19:20:12.495156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-11-26 19:20:12.495195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-11-26 19:20:12.495569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-11-26 19:20:12.495597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-11-26 19:20:12.495974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-11-26 19:20:12.496002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-11-26 19:20:12.496360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-11-26 19:20:12.496390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-11-26 19:20:12.496764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-11-26 19:20:12.496793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-11-26 19:20:12.497142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-11-26 19:20:12.497182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-11-26 19:20:12.497583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-11-26 19:20:12.497611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 
00:29:55.422 [2024-11-26 19:20:12.497971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-11-26 19:20:12.497999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-11-26 19:20:12.498370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-11-26 19:20:12.498400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-11-26 19:20:12.498773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-11-26 19:20:12.498801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-11-26 19:20:12.499176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-11-26 19:20:12.499206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-11-26 19:20:12.499614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-11-26 19:20:12.499642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-11-26 19:20:12.500012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-11-26 19:20:12.500040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-11-26 19:20:12.500384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-11-26 19:20:12.500413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-11-26 19:20:12.500775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-11-26 19:20:12.500804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-11-26 19:20:12.501175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-11-26 19:20:12.501207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-11-26 19:20:12.501469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-11-26 19:20:12.501498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 
00:29:55.422 [2024-11-26 19:20:12.501700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-11-26 19:20:12.501728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-11-26 19:20:12.502105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-11-26 19:20:12.502133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-11-26 19:20:12.502442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-11-26 19:20:12.502471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-11-26 19:20:12.502834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-11-26 19:20:12.502862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-11-26 19:20:12.503221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-11-26 19:20:12.503251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-11-26 19:20:12.503515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-11-26 19:20:12.503543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-11-26 19:20:12.503891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-11-26 19:20:12.503919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-11-26 19:20:12.504275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-11-26 19:20:12.504305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-11-26 19:20:12.504666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-11-26 19:20:12.504694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-11-26 19:20:12.505057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-11-26 19:20:12.505085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 
00:29:55.422 [2024-11-26 19:20:12.505343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-11-26 19:20:12.505373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-11-26 19:20:12.505724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-11-26 19:20:12.505753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-11-26 19:20:12.506116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-11-26 19:20:12.506144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-11-26 19:20:12.506505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-11-26 19:20:12.506534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-11-26 19:20:12.506895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-11-26 19:20:12.506925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-11-26 19:20:12.507257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-11-26 19:20:12.507287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-11-26 19:20:12.507653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-11-26 19:20:12.507682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-11-26 19:20:12.508058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-11-26 19:20:12.508086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-11-26 19:20:12.508461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-11-26 19:20:12.508491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-11-26 19:20:12.508869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-11-26 19:20:12.508897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 
00:29:55.429 [2024-11-26 19:20:12.581779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.429 [2024-11-26 19:20:12.581808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.429 qpair failed and we were unable to recover it. 00:29:55.429 [2024-11-26 19:20:12.582184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.429 [2024-11-26 19:20:12.582214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.429 qpair failed and we were unable to recover it. 00:29:55.429 [2024-11-26 19:20:12.582578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.430 [2024-11-26 19:20:12.582606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.430 qpair failed and we were unable to recover it. 00:29:55.430 [2024-11-26 19:20:12.582962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.430 [2024-11-26 19:20:12.582991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.430 qpair failed and we were unable to recover it. 00:29:55.430 [2024-11-26 19:20:12.583364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.430 [2024-11-26 19:20:12.583393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.430 qpair failed and we were unable to recover it. 00:29:55.430 [2024-11-26 19:20:12.583746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.430 [2024-11-26 19:20:12.583775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.430 qpair failed and we were unable to recover it. 00:29:55.430 [2024-11-26 19:20:12.584120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.430 [2024-11-26 19:20:12.584149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.430 qpair failed and we were unable to recover it. 00:29:55.430 [2024-11-26 19:20:12.584532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.430 [2024-11-26 19:20:12.584560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.430 qpair failed and we were unable to recover it. 00:29:55.430 [2024-11-26 19:20:12.584901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.430 [2024-11-26 19:20:12.584929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.430 qpair failed and we were unable to recover it. 00:29:55.430 [2024-11-26 19:20:12.585289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.430 [2024-11-26 19:20:12.585319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.430 qpair failed and we were unable to recover it. 
00:29:55.430 [2024-11-26 19:20:12.585683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.430 [2024-11-26 19:20:12.585710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.430 qpair failed and we were unable to recover it. 00:29:55.430 [2024-11-26 19:20:12.585949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.430 [2024-11-26 19:20:12.585982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.430 qpair failed and we were unable to recover it. 00:29:55.430 [2024-11-26 19:20:12.586370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.430 [2024-11-26 19:20:12.586400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.430 qpair failed and we were unable to recover it. 00:29:55.430 [2024-11-26 19:20:12.586767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.430 [2024-11-26 19:20:12.586794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.430 qpair failed and we were unable to recover it. 00:29:55.430 [2024-11-26 19:20:12.587150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.430 [2024-11-26 19:20:12.587190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.430 qpair failed and we were unable to recover it. 00:29:55.430 [2024-11-26 19:20:12.587548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.430 [2024-11-26 19:20:12.587577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.430 qpair failed and we were unable to recover it. 00:29:55.430 [2024-11-26 19:20:12.587935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.430 [2024-11-26 19:20:12.587962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.430 qpair failed and we were unable to recover it. 00:29:55.430 [2024-11-26 19:20:12.588324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.430 [2024-11-26 19:20:12.588354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.430 qpair failed and we were unable to recover it. 00:29:55.430 [2024-11-26 19:20:12.588725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.430 [2024-11-26 19:20:12.588753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.430 qpair failed and we were unable to recover it. 00:29:55.430 [2024-11-26 19:20:12.589114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.430 [2024-11-26 19:20:12.589143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.430 qpair failed and we were unable to recover it. 
00:29:55.430 [2024-11-26 19:20:12.589915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.430 [2024-11-26 19:20:12.589957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.430 qpair failed and we were unable to recover it. 00:29:55.430 [2024-11-26 19:20:12.590342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.430 [2024-11-26 19:20:12.590376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.430 qpair failed and we were unable to recover it. 00:29:55.430 [2024-11-26 19:20:12.590745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.430 [2024-11-26 19:20:12.590774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.430 qpair failed and we were unable to recover it. 00:29:55.430 [2024-11-26 19:20:12.591144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.430 [2024-11-26 19:20:12.591183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.430 qpair failed and we were unable to recover it. 00:29:55.430 [2024-11-26 19:20:12.591502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.430 [2024-11-26 19:20:12.591530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.430 qpair failed and we were unable to recover it. 00:29:55.430 [2024-11-26 19:20:12.591902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.430 [2024-11-26 19:20:12.591929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.430 qpair failed and we were unable to recover it. 00:29:55.430 [2024-11-26 19:20:12.592303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.430 [2024-11-26 19:20:12.592332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.430 qpair failed and we were unable to recover it. 00:29:55.430 [2024-11-26 19:20:12.592692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.430 [2024-11-26 19:20:12.592728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.430 qpair failed and we were unable to recover it. 00:29:55.430 [2024-11-26 19:20:12.593143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.430 [2024-11-26 19:20:12.593199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.430 qpair failed and we were unable to recover it. 00:29:55.430 [2024-11-26 19:20:12.593464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.430 [2024-11-26 19:20:12.593493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.430 qpair failed and we were unable to recover it. 
00:29:55.430 [2024-11-26 19:20:12.593838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.430 [2024-11-26 19:20:12.593866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.430 qpair failed and we were unable to recover it. 00:29:55.430 [2024-11-26 19:20:12.594126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.430 [2024-11-26 19:20:12.594172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.430 qpair failed and we were unable to recover it. 00:29:55.430 [2024-11-26 19:20:12.594529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.431 [2024-11-26 19:20:12.594557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.431 qpair failed and we were unable to recover it. 00:29:55.431 [2024-11-26 19:20:12.594965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.431 [2024-11-26 19:20:12.594993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.431 qpair failed and we were unable to recover it. 00:29:55.431 [2024-11-26 19:20:12.595379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.431 [2024-11-26 19:20:12.595409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.431 qpair failed and we were unable to recover it. 00:29:55.431 [2024-11-26 19:20:12.595767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.431 [2024-11-26 19:20:12.595795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.431 qpair failed and we were unable to recover it. 00:29:55.431 [2024-11-26 19:20:12.596060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.431 [2024-11-26 19:20:12.596088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.431 qpair failed and we were unable to recover it. 00:29:55.431 [2024-11-26 19:20:12.596452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.431 [2024-11-26 19:20:12.596482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.431 qpair failed and we were unable to recover it. 00:29:55.431 [2024-11-26 19:20:12.596848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.431 [2024-11-26 19:20:12.596877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.431 qpair failed and we were unable to recover it. 00:29:55.431 [2024-11-26 19:20:12.597245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.431 [2024-11-26 19:20:12.597274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.431 qpair failed and we were unable to recover it. 
00:29:55.431 [2024-11-26 19:20:12.597631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.431 [2024-11-26 19:20:12.597659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.431 qpair failed and we were unable to recover it. 00:29:55.431 [2024-11-26 19:20:12.598046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.431 [2024-11-26 19:20:12.598074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.431 qpair failed and we were unable to recover it. 00:29:55.431 [2024-11-26 19:20:12.598404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.431 [2024-11-26 19:20:12.598433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.431 qpair failed and we were unable to recover it. 00:29:55.431 [2024-11-26 19:20:12.598792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.431 [2024-11-26 19:20:12.598820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.431 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 19:20:12.599080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 19:20:12.599112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 19:20:12.599529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 19:20:12.599562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 19:20:12.599858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 19:20:12.599887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 19:20:12.600239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 19:20:12.600269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 19:20:12.600665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 19:20:12.600693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 19:20:12.601055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 19:20:12.601083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 
00:29:55.709 [2024-11-26 19:20:12.601426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 19:20:12.601455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 19:20:12.601817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 19:20:12.601846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 19:20:12.602216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 19:20:12.602246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 19:20:12.602613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 19:20:12.602641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 19:20:12.603007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 19:20:12.603043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 19:20:12.603449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 19:20:12.603479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 19:20:12.603830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 19:20:12.603858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 19:20:12.604227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 19:20:12.604258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 19:20:12.604618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 19:20:12.604646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 19:20:12.605004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 19:20:12.605032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 
00:29:55.709 [2024-11-26 19:20:12.605386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 19:20:12.605415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 19:20:12.605771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 19:20:12.605799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 19:20:12.606058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 19:20:12.606085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 19:20:12.606433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 19:20:12.606462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 19:20:12.606829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 19:20:12.606857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 19:20:12.607033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 19:20:12.607066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 19:20:12.607444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 19:20:12.607474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 19:20:12.607836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 19:20:12.607864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 19:20:12.608228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 19:20:12.608259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 19:20:12.608526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 19:20:12.608554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 
00:29:55.709 [2024-11-26 19:20:12.608915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-11-26 19:20:12.608943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-11-26 19:20:12.609345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.609374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.609735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.609763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.610133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.610169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.610530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.610559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.610826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.610854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.611258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.611287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.611660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.611687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.612046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.612074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.612443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.612474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 
00:29:55.710 [2024-11-26 19:20:12.612829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.612857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.613117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.613150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.613569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.613597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.613964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.613992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.614366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.614396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.614756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.614784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.615147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.615188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.615542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.615569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.615934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.615963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.616324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.616355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 
00:29:55.710 [2024-11-26 19:20:12.616717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.616745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.617083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.617110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.617475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.617505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.617863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.617891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.618259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.618288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.618651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.618681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.618978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.619007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.619381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.619410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.619813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.619841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.620216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.620246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 
00:29:55.710 [2024-11-26 19:20:12.620436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.620465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.620857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.620885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.621224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.621254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.621609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.621637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.622001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.622028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.622384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.622413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.622779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.622807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.623180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.623209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.623491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.623520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.623877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.623905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 
00:29:55.710 [2024-11-26 19:20:12.624259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.624289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.624655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.624683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.625051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.625079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.625421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.625451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.625818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.625845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.626215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.626244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.626635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.626664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.627026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.627055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.627420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.627450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.627808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.627835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 
00:29:55.710 [2024-11-26 19:20:12.628197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.628227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.628613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.628641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.629004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.629038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.629387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.629417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.629780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.629808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.630172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.630202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.630624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.630652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.630981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.631009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.631381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.631411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.631770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.631797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 
00:29:55.710 [2024-11-26 19:20:12.632190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.632221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.632580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.632608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.632967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.632995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.633352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.633381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.633739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-11-26 19:20:12.633767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-11-26 19:20:12.634126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 19:20:12.634154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 19:20:12.634532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 19:20:12.634561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 19:20:12.634924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 19:20:12.634953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 19:20:12.635322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 19:20:12.635352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-11-26 19:20:12.635689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 19:20:12.635716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 
00:29:55.711 [2024-11-26 19:20:12.636078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-11-26 19:20:12.636106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it.
[... the same three-message failure repeats continuously from 19:20:12.636 through 19:20:12.715 with only the microsecond timestamps changing: each retry's connect() to 10.0.0.2 port 4420 is refused with errno = 111, nvme_tcp_qpair_connect_sock reports the sock connection error for tqpair=0x13780c0, and the qpair fails without recovery; the intermediate identical entries are elided here ...]
00:29:55.714 [2024-11-26 19:20:12.715296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 19:20:12.715326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it.
00:29:55.714 [2024-11-26 19:20:12.715696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 19:20:12.715725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 19:20:12.715990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 19:20:12.716018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 19:20:12.716349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 19:20:12.716380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 19:20:12.716744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 19:20:12.716773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 19:20:12.717143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 19:20:12.717184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 19:20:12.717546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 19:20:12.717576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 19:20:12.717833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 19:20:12.717861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 19:20:12.718276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 19:20:12.718306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 19:20:12.718683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 19:20:12.718711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 19:20:12.719083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 19:20:12.719111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 
00:29:55.714 [2024-11-26 19:20:12.719470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 19:20:12.719502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 19:20:12.719863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 19:20:12.719893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 19:20:12.720266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 19:20:12.720297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 19:20:12.720660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 19:20:12.720689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 19:20:12.721068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 19:20:12.721097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 19:20:12.721475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 19:20:12.721504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 19:20:12.721856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 19:20:12.721891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 19:20:12.722282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 19:20:12.722312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 19:20:12.722686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 19:20:12.722714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 19:20:12.723088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 19:20:12.723118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 
00:29:55.714 [2024-11-26 19:20:12.723538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 19:20:12.723569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 19:20:12.723933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 19:20:12.723962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 19:20:12.724256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 19:20:12.724286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 19:20:12.724545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 19:20:12.724574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 19:20:12.724964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 19:20:12.724992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 19:20:12.725391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 19:20:12.725421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.714 qpair failed and we were unable to recover it. 00:29:55.714 [2024-11-26 19:20:12.725788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.714 [2024-11-26 19:20:12.725817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.726187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.726219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.726588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.726617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.726864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.726895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 
00:29:55.715 [2024-11-26 19:20:12.727239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.727270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.727634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.727662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.728076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.728105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.728449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.728479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.728875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.728903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.729263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.729293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.729668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.729697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.730065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.730095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.730393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.730424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.730765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.730793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 
00:29:55.715 [2024-11-26 19:20:12.731174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.731204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.731584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.731612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.731982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.732012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.732410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.732446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.732846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.732876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.733236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.733266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.733591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.733620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.733989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.734017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.734243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.734272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.734735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.734764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 
00:29:55.715 [2024-11-26 19:20:12.735139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.735189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.735539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.735569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.735919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.735948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.736226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.736256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.736482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.736511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.736896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.736925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.737374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.737406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.737769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.737797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.737985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.738017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.738360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.738390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 
00:29:55.715 [2024-11-26 19:20:12.738752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.738781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.739242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.739272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.739671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.739699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.740068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.740096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.740412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.740443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.740706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.740734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.741083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.741112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.741547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.741578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.741934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.741963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.742351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.742382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 
00:29:55.715 [2024-11-26 19:20:12.742723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.742752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.743013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.743043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.743407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.743436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.743798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.743826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.744193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.744222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.744570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.744599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.744970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.744998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.745213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.745268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.745673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.745703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.746066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.746095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 
00:29:55.715 [2024-11-26 19:20:12.746455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.746487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.715 [2024-11-26 19:20:12.746843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.715 [2024-11-26 19:20:12.746872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.715 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.747189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.747219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.747453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.747485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.747767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.747797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.748179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.748209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.748513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.748542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.748798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.748827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.749262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.749293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.749729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.749759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 
00:29:55.716 [2024-11-26 19:20:12.750086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.750115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.750505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.750535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.750948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.750978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.751360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.751390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.751767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.751796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.752171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.752202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.752645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.752674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.753013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.753041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.753493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.753525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.753927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.753956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 
00:29:55.716 [2024-11-26 19:20:12.754226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.754256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.754606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.754635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.754985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.755016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.755362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.755393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.755758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.755787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.756153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.756194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.756554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.756584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.756830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.756862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.757240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.757272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.757523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.757557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 
00:29:55.716 [2024-11-26 19:20:12.757829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.757858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.758240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.758277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.758657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.758687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.759112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.759141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.759505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.759535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.759923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.759952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.760342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.760372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.760648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.760677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.760955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.760985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.761393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.761423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 
00:29:55.716 [2024-11-26 19:20:12.761772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.761801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.762057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.762086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.762311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.762341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.762737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.762766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.763131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.763182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.763634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.763665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.764073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.764103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.764447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.764477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.764837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.764865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.765119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.765147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 
00:29:55.716 [2024-11-26 19:20:12.765504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.765533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.765872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.765900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.766212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.766243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.766641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.766671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.767050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.767079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.767491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.767520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.767887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.767916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.768180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.768210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.768583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.768619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.768864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.768894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 
00:29:55.716 [2024-11-26 19:20:12.769239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-11-26 19:20:12.769269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-11-26 19:20:12.769645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.769673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.770042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.770070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.770464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.770495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.770847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.770875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.771223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.771253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.771648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.771677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.771964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.771996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.772394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.772423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.772787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.772817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 
00:29:55.717 [2024-11-26 19:20:12.773197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.773227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.773606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.773634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.773985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.774013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.774469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.774499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.774862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.774890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.775267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.775296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.775564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.775592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.775942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.775970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.776317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.776346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.776587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.776615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 
00:29:55.717 [2024-11-26 19:20:12.776974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.777003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.777355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.777384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.777746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.777774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.778141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.778181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.778553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.778581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.778942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.778971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.779229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.779260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.779623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.779650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.780007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.780034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.780389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.780418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 
00:29:55.717 [2024-11-26 19:20:12.780659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.780687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.780937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.780965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.781340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.781370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.781607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.781639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.781990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.782018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.782389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.782419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.782776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.782804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.783180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.783209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.783566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.783594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.783960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.783989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 
00:29:55.717 [2024-11-26 19:20:12.784230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.784260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.784635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.784663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.785023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.785052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.785394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.785423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.785784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.785813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.786188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.786218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.786459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.786487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.786824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.786852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.787228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.787257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.787625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.787653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 
00:29:55.717 [2024-11-26 19:20:12.788020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.788048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.788387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.788416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.788774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.788802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.789177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.789209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.789492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.789521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.789882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.789909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.790269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.790298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.790668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.790696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.790995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.791023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.791287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.791316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 
00:29:55.717 [2024-11-26 19:20:12.791686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.791715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.792074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-11-26 19:20:12.792102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-11-26 19:20:12.792452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.792484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.792833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.792862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.793232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.793263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.793647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.793675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.794042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.794077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.794432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.794462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.794872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.794900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.795250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.795280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 
00:29:55.718 [2024-11-26 19:20:12.795681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.795709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.795973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.796001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.796244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.796274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.796670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.796699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.797064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.797092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.797473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.797503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.797849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.797877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.798242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.798272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.798531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.798560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.798858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.798886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 
00:29:55.718 [2024-11-26 19:20:12.799244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.799274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.799654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.799684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.800072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.800101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.800413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.800445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.800829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.800857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.801221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.801252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.801607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.801636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.802000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.802027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.802390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.802420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.802782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.802811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 
00:29:55.718 [2024-11-26 19:20:12.803197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.803227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.803506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.803539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.803912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.803940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.804296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.804333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.804684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.804713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.805138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.805179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.805414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.805443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.805825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.805854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.806198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.806228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.806593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.806620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 
00:29:55.718 [2024-11-26 19:20:12.806991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.807018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.807411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.807442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.807678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.807706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.808068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.808097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.808507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.808536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.808879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.808908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.809183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.809213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.809554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.809583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.809918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.809945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.810300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.810330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 
00:29:55.718 [2024-11-26 19:20:12.810608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.810635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.810909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.810945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.811312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.811341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.811716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.811744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.811994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.812022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.812406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-11-26 19:20:12.812435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-11-26 19:20:12.812796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.812825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.813198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.813228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.813608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.813636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.814012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.814040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 
00:29:55.719 [2024-11-26 19:20:12.814291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.814328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.814584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.814612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.814944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.814973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.815236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.815266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.815631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.815659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.816025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.816053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.816480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.816510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.816870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.816898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.817282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.817311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.817704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.817733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 
00:29:55.719 [2024-11-26 19:20:12.818095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.818123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.818550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.818579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.818941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.818969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.819326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.819357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.819718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.819747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.820018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.820046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.820278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.820312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.820684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.820713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.821063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.821092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.821466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.821496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 
00:29:55.719 [2024-11-26 19:20:12.821872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.821901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.822031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.822058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.822401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.822430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.822817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.822845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.823207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.823237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.823493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.823525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.823888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.823916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.824404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.824433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.824790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.824821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.825086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.825114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 
00:29:55.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3132750 Killed "${NVMF_APP[@]}" "$@" 00:29:55.719 [2024-11-26 19:20:12.825414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.825446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.825799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.825828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 19:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:29:55.719 [2024-11-26 19:20:12.826184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.826215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 19:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:55.719 [2024-11-26 19:20:12.826610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.826639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 19:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:55.719 [2024-11-26 19:20:12.826897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.826926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 19:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:55.719 [2024-11-26 19:20:12.827194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.827226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 19:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:55.719 [2024-11-26 19:20:12.827539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.827569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 
00:29:55.719 [2024-11-26 19:20:12.827915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.827946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.828315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.828352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.828642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.828671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.829050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.829080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.829430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.829460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.829799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.829828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.830181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.830211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.830587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.830615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.830978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.831014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.831390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.831419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 
00:29:55.719 [2024-11-26 19:20:12.831774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.831803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.832043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.832071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.832326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.832359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.832732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-11-26 19:20:12.832761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-11-26 19:20:12.833103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.833131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.833548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.833578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.833926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.833954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.834304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.834333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.834721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.834750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.834981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.835009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 
00:29:55.720 [2024-11-26 19:20:12.835403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.835433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.835785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.835816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 19:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3133597 00:29:55.720 [2024-11-26 19:20:12.836181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.836213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 19:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3133597 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.836588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.836618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 19:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3133597 ']' 00:29:55.720 19:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 19:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:55.720 [2024-11-26 19:20:12.836965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.836996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 19:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:55.720 [2024-11-26 19:20:12.837362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.837395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 19:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:55.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:55.720 qpair failed and we were unable to recover it. 
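The shell trace interleaved above shows the tc2 test case bringing up a fresh target: nvmf_tgt is launched inside the cvl_0_0_ns_spdk network namespace as PID 3133597, and waitforlisten then polls (up to max_retries=100, per the trace) until the application's RPC socket /var/tmp/spdk.sock accepts connections; the host-side qpair connects keep failing with errno 111 until that target is back up. A rough C sketch of such a readiness probe, assuming the helper does little more than retry a UNIX-domain connect() -- the real waitforlisten in autotest_common.sh goes through the RPC client, so this is an approximation and wait_for_listen is a hypothetical name:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    /* Hypothetical stand-in for the waitforlisten helper seen in the
     * trace: keep probing the app's UNIX-domain RPC socket until a
     * connect() succeeds or max_retries (100 in the trace) is spent. */
    static int wait_for_listen(const char *path, int max_retries)
    {
        struct sockaddr_un sa = {0};
        sa.sun_family = AF_UNIX;
        strncpy(sa.sun_path, path, sizeof(sa.sun_path) - 1);

        for (int i = 0; i < max_retries; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0)
                return -1;
            if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0) {
                close(fd);
                return 0;            /* target is up and listening */
            }
            close(fd);
            usleep(100 * 1000);      /* back off briefly before retrying */
        }
        return -1;                   /* process never came up in time */
    }

    int main(void)
    {
        if (wait_for_listen("/var/tmp/spdk.sock", 100) == 0)
            puts("listening");
        else
            puts("timed out");
        return 0;
    }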
00:29:55.720 19:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:55.720 [2024-11-26 19:20:12.837798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.837832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 19:20:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:55.720 [2024-11-26 19:20:12.838127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.838174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.838528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.838561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.838923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.838953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.839293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.839326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.839680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.839710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.840018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.840048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.840419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.840451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.840813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.840845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 
00:29:55.720 [2024-11-26 19:20:12.841221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.841253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.841683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.841713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.842109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.842146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.842518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.842554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.842834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.842867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.843238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.843272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.843626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.843658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.844026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.844058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.844436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.844469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.844828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.844858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 
00:29:55.720 [2024-11-26 19:20:12.845225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.845256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.845638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.845667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.846030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.846063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.846311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.846341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.846743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.846773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.847140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.847186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.847565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.847596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.848000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.848030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.848300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.848332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.848729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.848759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 
00:29:55.720 [2024-11-26 19:20:12.849131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.849171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.849535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.849566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.849896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.849927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.850365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.850396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.850762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.850792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.851140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.851184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.851566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.851596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.851839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.851872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.852109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.852142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.852560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.852596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 
00:29:55.720 [2024-11-26 19:20:12.852950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.852980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.853344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.853375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-11-26 19:20:12.853751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-11-26 19:20:12.853780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.854126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.854154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.854419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.854448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.854696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.854730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.855002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.855032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.855461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.855490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.855878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.855906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.856261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.856290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 
00:29:55.721 [2024-11-26 19:20:12.856544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.856577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.856941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.856970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.857323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.857353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.857713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.857741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.858109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.858138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.858452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.858482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.858839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.858867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.859210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.859240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.859505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.859534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.859937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.859966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 
00:29:55.721 [2024-11-26 19:20:12.860145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.860186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.860518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.860546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.860906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.860934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.861344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.861375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.861674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.861707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.862082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.862111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.862548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.862579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.862942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.862971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.863257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.863287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.863657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.863685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 
00:29:55.721 [2024-11-26 19:20:12.863856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.863885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.864109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.864138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.864559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.864588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.864945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.864973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.865239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.865269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.865532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.865562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.865931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.865960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.866422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.866453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.866797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.866826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.867033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.867061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 
00:29:55.721 [2024-11-26 19:20:12.867472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.867503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.867842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.867872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.868240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.868269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.868649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.868678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.869055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.869084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.869248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.869278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.869562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.869592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.870012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.870040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.870346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.870374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.870724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.870752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 
00:29:55.721 [2024-11-26 19:20:12.871149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.871191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.871527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.871556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.871793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.871821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.872186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.872216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.872450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-11-26 19:20:12.872482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-11-26 19:20:12.872923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.872952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.873273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.873305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.873722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.873752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.874131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.874174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.874404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.874435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 
00:29:55.722 [2024-11-26 19:20:12.874835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.874865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.875243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.875274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.875652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.875681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.876045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.876074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.876271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.876301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.876672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.876700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.876967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.876997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.877418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.877455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.877628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.877657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.877908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.877938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 
00:29:55.722 [2024-11-26 19:20:12.878306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.878336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.878709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.878737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.878998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.879031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.879470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.879501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.879738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.879766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.880153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.880195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.880603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.880632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.880941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.880969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.881284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.881314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.881707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.881737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 
00:29:55.722 [2024-11-26 19:20:12.882109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.882138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.882600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.882631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.882985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.883014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.883277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.883306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.883644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.883673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.884050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.884079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.884506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.884536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.884883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.884913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.885208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.885240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.885644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.885674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 
00:29:55.722 [2024-11-26 19:20:12.886048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.886078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.886427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.886459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.886749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.886779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.887138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.887177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.887563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.887599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.887957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.887987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.888304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.888336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.888668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.888698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.889062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.889092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.889405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.889437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 
00:29:55.722 [2024-11-26 19:20:12.889704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.889734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.889999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.890028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.890284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.890315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.890655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.890683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.891088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.891116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.891594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.891625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.891983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.892012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.892286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.892316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-11-26 19:20:12.892455] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:29:55.722 [2024-11-26 19:20:12.892529] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:55.722 [2024-11-26 19:20:12.892603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-11-26 19:20:12.892635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 
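The EAL parameters line above confirms the restarted target came up with core mask 0xF0 (matching the -m 0xF0 passed to nvmf_tgt), telemetry disabled, and a dedicated spdk0 file prefix so its hugepage state cannot collide with the host-side process. A hex core mask selects one logical core per set bit, so 0xF0 pins the target to cores 4-7; a small sketch of the decoding (plain bit arithmetic, nothing SPDK-specific):

    #include <stdio.h>

    /* Decode a DPDK-style hex core mask: each set bit selects one
     * logical core. 0xF0 (from "-m 0xF0" / "-c 0xF0" in the log)
     * selects cores 4, 5, 6 and 7. */
    int main(void)
    {
        unsigned long mask = 0xF0;
        printf("core mask 0x%lX ->", mask);
        for (int core = 0; mask; core++, mask >>= 1) {
            if (mask & 1)
                printf(" %d", core);
        }
        putchar('\n');   /* prints: core mask 0xF0 -> 4 5 6 7 */
        return 0;
    }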
[... the identical connect() failed (errno = 111) / sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." triplet repeats continuously for attempts timestamped 19:20:12.892939 through 19:20:12.967587 (elapsed log time 00:29:55.722 to 00:29:56.005) ...]
00:29:56.005 [2024-11-26 19:20:12.967961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.005 [2024-11-26 19:20:12.967989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.005 qpair failed and we were unable to recover it. 00:29:56.005 [2024-11-26 19:20:12.968337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.005 [2024-11-26 19:20:12.968368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.005 qpair failed and we were unable to recover it. 00:29:56.005 [2024-11-26 19:20:12.968749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.005 [2024-11-26 19:20:12.968779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.005 qpair failed and we were unable to recover it. 00:29:56.005 [2024-11-26 19:20:12.969143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.005 [2024-11-26 19:20:12.969185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.005 qpair failed and we were unable to recover it. 00:29:56.005 [2024-11-26 19:20:12.969525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.005 [2024-11-26 19:20:12.969555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.005 qpair failed and we were unable to recover it. 00:29:56.005 [2024-11-26 19:20:12.969930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.005 [2024-11-26 19:20:12.969958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.005 qpair failed and we were unable to recover it. 00:29:56.005 [2024-11-26 19:20:12.970311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.005 [2024-11-26 19:20:12.970342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.005 qpair failed and we were unable to recover it. 00:29:56.005 [2024-11-26 19:20:12.970715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.005 [2024-11-26 19:20:12.970743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.005 qpair failed and we were unable to recover it. 00:29:56.005 [2024-11-26 19:20:12.971106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.005 [2024-11-26 19:20:12.971136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.005 qpair failed and we were unable to recover it. 00:29:56.005 [2024-11-26 19:20:12.971388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.005 [2024-11-26 19:20:12.971419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.005 qpair failed and we were unable to recover it. 
00:29:56.005 [2024-11-26 19:20:12.971848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.005 [2024-11-26 19:20:12.971876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.005 qpair failed and we were unable to recover it. 00:29:56.005 [2024-11-26 19:20:12.972194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.005 [2024-11-26 19:20:12.972224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.005 qpair failed and we were unable to recover it. 00:29:56.005 [2024-11-26 19:20:12.972577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.005 [2024-11-26 19:20:12.972607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.005 qpair failed and we were unable to recover it. 00:29:56.005 [2024-11-26 19:20:12.972964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.005 [2024-11-26 19:20:12.972993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.005 qpair failed and we were unable to recover it. 00:29:56.005 [2024-11-26 19:20:12.973344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.005 [2024-11-26 19:20:12.973374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.005 qpair failed and we were unable to recover it. 00:29:56.005 [2024-11-26 19:20:12.973638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.005 [2024-11-26 19:20:12.973667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.005 qpair failed and we were unable to recover it. 00:29:56.005 [2024-11-26 19:20:12.974023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.005 [2024-11-26 19:20:12.974052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.005 qpair failed and we were unable to recover it. 00:29:56.005 [2024-11-26 19:20:12.974313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.005 [2024-11-26 19:20:12.974343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.005 qpair failed and we were unable to recover it. 00:29:56.005 [2024-11-26 19:20:12.974710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.005 [2024-11-26 19:20:12.974739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.005 qpair failed and we were unable to recover it. 00:29:56.005 [2024-11-26 19:20:12.975130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.005 [2024-11-26 19:20:12.975173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.005 qpair failed and we were unable to recover it. 
00:29:56.006 [2024-11-26 19:20:12.976263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 19:20:12.976298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 19:20:12.976682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 19:20:12.976711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 19:20:12.977073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 19:20:12.977102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 19:20:12.977453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 19:20:12.977483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 19:20:12.977839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 19:20:12.977868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 19:20:12.978225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 19:20:12.978255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 19:20:12.978485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 19:20:12.978513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 19:20:12.978739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 19:20:12.978766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 19:20:12.979110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 19:20:12.979146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 19:20:12.979581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 19:20:12.979611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 
00:29:56.006 [2024-11-26 19:20:12.979979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 19:20:12.980008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 19:20:12.980343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 19:20:12.980376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 19:20:12.980702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 19:20:12.980731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 19:20:12.981129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 19:20:12.981170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 19:20:12.981524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 19:20:12.981552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 19:20:12.981951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 19:20:12.981981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 19:20:12.982273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 19:20:12.982304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 19:20:12.982677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 19:20:12.982706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 19:20:12.983054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 19:20:12.983083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 19:20:12.983450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 19:20:12.983480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 
00:29:56.006 [2024-11-26 19:20:12.983706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 19:20:12.983739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 19:20:12.984080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 19:20:12.984110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 19:20:12.984504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 19:20:12.984535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 19:20:12.984901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 19:20:12.984930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 19:20:12.985288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 19:20:12.985319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 19:20:12.985678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 19:20:12.985709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 19:20:12.986065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 19:20:12.986093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 19:20:12.986490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 19:20:12.986521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 19:20:12.986892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 19:20:12.986920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 19:20:12.987282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 19:20:12.987315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 
00:29:56.006 [2024-11-26 19:20:12.987699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 19:20:12.987728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 19:20:12.988079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 19:20:12.988107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 19:20:12.988523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 19:20:12.988555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 19:20:12.988970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 19:20:12.989000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 19:20:12.989384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 19:20:12.989415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 19:20:12.989794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 19:20:12.989835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 19:20:12.990184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 19:20:12.990217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 19:20:12.990577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-11-26 19:20:12.990606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.006 qpair failed and we were unable to recover it. 00:29:56.006 [2024-11-26 19:20:12.990975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:12.991005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 19:20:12.991390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:12.991421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 
00:29:56.007 [2024-11-26 19:20:12.991642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:12.991672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 19:20:12.992036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:12.992066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 19:20:12.992483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:12.992513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 19:20:12.992866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:12.992896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 19:20:12.993281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:12.993312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 19:20:12.993659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:12.993688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 19:20:12.994057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:12.994086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 19:20:12.994460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:12.994490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 19:20:12.994859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:12.994887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 19:20:12.995236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:12.995267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 
00:29:56.007 [2024-11-26 19:20:12.995633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:12.995662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 19:20:12.996019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:12.996050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 19:20:12.996427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:12.996460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 19:20:12.996803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:12.996834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 19:20:12.997057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:12.997089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 19:20:12.997462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:12.997492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 19:20:12.997850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:12.997880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 19:20:12.998238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:12.998271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 19:20:12.998549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:12.998578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 19:20:12.999006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:12.999035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 
00:29:56.007 [2024-11-26 19:20:12.999377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:12.999407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 19:20:12.999746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:12.999775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 19:20:13.000150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:13.000192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 19:20:13.000538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:13.000567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 19:20:13.000924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:13.000954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 19:20:13.001286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:56.007 [2024-11-26 19:20:13.001295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:13.001328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 19:20:13.001701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:13.001731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 19:20:13.002078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:13.002107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 19:20:13.002468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:13.002500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 19:20:13.002840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:13.002868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 
00:29:56.007 [2024-11-26 19:20:13.003116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:13.003146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 19:20:13.003538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:13.003567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 19:20:13.003954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:13.003982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 19:20:13.004314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:13.004344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 19:20:13.004704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:13.004733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 19:20:13.005104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:13.005135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.007 qpair failed and we were unable to recover it. 00:29:56.007 [2024-11-26 19:20:13.005498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.007 [2024-11-26 19:20:13.005528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 19:20:13.005837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 19:20:13.005866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 19:20:13.006229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 19:20:13.006260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 19:20:13.006586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 19:20:13.006614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 
00:29:56.008 [2024-11-26 19:20:13.006980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 19:20:13.007008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 19:20:13.007339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 19:20:13.007370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 19:20:13.007744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 19:20:13.007772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 19:20:13.008126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 19:20:13.008156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 19:20:13.008428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 19:20:13.008462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 19:20:13.008801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 19:20:13.008829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 19:20:13.009133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 19:20:13.009176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 19:20:13.009529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 19:20:13.009558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 19:20:13.009800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 19:20:13.009828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 19:20:13.010193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 19:20:13.010233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 
00:29:56.008 [2024-11-26 19:20:13.010592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 19:20:13.010621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 19:20:13.010877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 19:20:13.010906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 19:20:13.011278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 19:20:13.011308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 19:20:13.011677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 19:20:13.011706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 19:20:13.011944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 19:20:13.011976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 19:20:13.012369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 19:20:13.012401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 19:20:13.012759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 19:20:13.012789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 19:20:13.013149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 19:20:13.013193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 19:20:13.013572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 19:20:13.013602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 19:20:13.013976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 19:20:13.014005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 
00:29:56.008 [2024-11-26 19:20:13.014391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 19:20:13.014423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 19:20:13.014782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 19:20:13.014812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 19:20:13.015174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 19:20:13.015205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 19:20:13.015439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 19:20:13.015469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 19:20:13.015821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 19:20:13.015850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 19:20:13.016205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 19:20:13.016235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 19:20:13.016592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 19:20:13.016622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 19:20:13.016990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 19:20:13.017020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 19:20:13.017382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 19:20:13.017413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 19:20:13.017636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 19:20:13.017664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 
00:29:56.008 [2024-11-26 19:20:13.018012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 19:20:13.018040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 19:20:13.018306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 19:20:13.018336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 19:20:13.018690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 19:20:13.018719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 19:20:13.019083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 19:20:13.019112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 19:20:13.019508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.008 [2024-11-26 19:20:13.019539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-11-26 19:20:13.019910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 19:20:13.019938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 19:20:13.020301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 19:20:13.020338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 19:20:13.020716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 19:20:13.020744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 19:20:13.021104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 19:20:13.021132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 19:20:13.021490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 19:20:13.021519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 
00:29:56.009 [2024-11-26 19:20:13.021883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 19:20:13.021914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 19:20:13.022279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 19:20:13.022310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 19:20:13.022675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 19:20:13.022704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 19:20:13.023053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 19:20:13.023081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 19:20:13.023420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 19:20:13.023449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 19:20:13.023794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 19:20:13.023822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 19:20:13.024201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 19:20:13.024232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 19:20:13.024455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 19:20:13.024483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 19:20:13.024719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 19:20:13.024750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 19:20:13.025042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 19:20:13.025070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 
00:29:56.009 [2024-11-26 19:20:13.025422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 19:20:13.025452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 19:20:13.025777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 19:20:13.025806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 19:20:13.026208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 19:20:13.026239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 19:20:13.026616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 19:20:13.026646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 19:20:13.027003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 19:20:13.027031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 19:20:13.027386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 19:20:13.027415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 19:20:13.027797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 19:20:13.027827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 19:20:13.028181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 19:20:13.028212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 19:20:13.028578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 19:20:13.028608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 19:20:13.028962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 19:20:13.028990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 
00:29:56.009 [2024-11-26 19:20:13.029255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 19:20:13.029284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 19:20:13.029514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 19:20:13.029542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 19:20:13.029866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 19:20:13.029895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 19:20:13.030257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 19:20:13.030286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 19:20:13.030651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 19:20:13.030681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 19:20:13.031032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 19:20:13.031061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 19:20:13.031402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 19:20:13.031432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 19:20:13.031688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 19:20:13.031717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 19:20:13.032061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 19:20:13.032089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-11-26 19:20:13.032453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-11-26 19:20:13.032482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 
00:29:56.010 [2024-11-26 19:20:13.032845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-11-26 19:20:13.032874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-11-26 19:20:13.033248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-11-26 19:20:13.033277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-11-26 19:20:13.033645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-11-26 19:20:13.033673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-11-26 19:20:13.033943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-11-26 19:20:13.033971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-11-26 19:20:13.034324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-11-26 19:20:13.034354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-11-26 19:20:13.034726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-11-26 19:20:13.034754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-11-26 19:20:13.035124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-11-26 19:20:13.035153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-11-26 19:20:13.035559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-11-26 19:20:13.035588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-11-26 19:20:13.035950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-11-26 19:20:13.035979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-11-26 19:20:13.036351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-11-26 19:20:13.036380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 
00:29:56.010 [2024-11-26 19:20:13.036596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-11-26 19:20:13.036625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-11-26 19:20:13.036979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-11-26 19:20:13.037007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-11-26 19:20:13.037345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-11-26 19:20:13.037374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-11-26 19:20:13.037748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-11-26 19:20:13.037777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-11-26 19:20:13.038153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-11-26 19:20:13.038197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-11-26 19:20:13.038523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-11-26 19:20:13.038551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-11-26 19:20:13.038921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-11-26 19:20:13.038949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-11-26 19:20:13.039314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-11-26 19:20:13.039345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-11-26 19:20:13.039724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-11-26 19:20:13.039752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-11-26 19:20:13.039987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-11-26 19:20:13.040017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 
00:29:56.010 [2024-11-26 19:20:13.040298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-11-26 19:20:13.040328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-11-26 19:20:13.040732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-11-26 19:20:13.040761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-11-26 19:20:13.041126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-11-26 19:20:13.041154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-11-26 19:20:13.041475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-11-26 19:20:13.041503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-11-26 19:20:13.041865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-11-26 19:20:13.041894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-11-26 19:20:13.042253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-11-26 19:20:13.042282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-11-26 19:20:13.042651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-11-26 19:20:13.042679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-11-26 19:20:13.043051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-11-26 19:20:13.043081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-11-26 19:20:13.043436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-11-26 19:20:13.043467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-11-26 19:20:13.043726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-11-26 19:20:13.043758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 
00:29:56.010 [2024-11-26 19:20:13.044040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-11-26 19:20:13.044068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-11-26 19:20:13.044404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-11-26 19:20:13.044434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-11-26 19:20:13.044785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-11-26 19:20:13.044813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-11-26 19:20:13.045182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-11-26 19:20:13.045211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-11-26 19:20:13.045556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-11-26 19:20:13.045591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-11-26 19:20:13.045851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-11-26 19:20:13.045879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-11-26 19:20:13.046244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-11-26 19:20:13.046273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-11-26 19:20:13.046631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-11-26 19:20:13.046661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-11-26 19:20:13.047023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-11-26 19:20:13.047051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-11-26 19:20:13.047394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-11-26 19:20:13.047425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 
00:29:56.011 [2024-11-26 19:20:13.047777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-11-26 19:20:13.047805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-11-26 19:20:13.048185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-11-26 19:20:13.048215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-11-26 19:20:13.048560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-11-26 19:20:13.048588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-11-26 19:20:13.048811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-11-26 19:20:13.048840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-11-26 19:20:13.049215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-11-26 19:20:13.049245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-11-26 19:20:13.049603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-11-26 19:20:13.049631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-11-26 19:20:13.049999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-11-26 19:20:13.050027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-11-26 19:20:13.050379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-11-26 19:20:13.050408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-11-26 19:20:13.050782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-11-26 19:20:13.050810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-11-26 19:20:13.051186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-11-26 19:20:13.051217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 
00:29:56.011 [2024-11-26 19:20:13.051601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-11-26 19:20:13.051631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-11-26 19:20:13.051994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-11-26 19:20:13.052022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-11-26 19:20:13.052384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-11-26 19:20:13.052413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-11-26 19:20:13.052773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-11-26 19:20:13.052801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-11-26 19:20:13.053177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-11-26 19:20:13.053209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-11-26 19:20:13.053552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-11-26 19:20:13.053581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-11-26 19:20:13.053943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-11-26 19:20:13.053972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-11-26 19:20:13.054361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-11-26 19:20:13.054391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-11-26 19:20:13.054690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-11-26 19:20:13.054720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-11-26 19:20:13.054714] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:56.011 [2024-11-26 19:20:13.054765] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:56.011 [2024-11-26 19:20:13.054775] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:56.011 [2024-11-26 19:20:13.054783] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:56.011 [2024-11-26 19:20:13.054791] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:56.011 [2024-11-26 19:20:13.055069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-11-26 19:20:13.055104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-11-26 19:20:13.055500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-11-26 19:20:13.055531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-11-26 19:20:13.055894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-11-26 19:20:13.055922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-11-26 19:20:13.056292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-11-26 19:20:13.056323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-11-26 19:20:13.056697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-11-26 19:20:13.056725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-11-26 19:20:13.056815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:56.011 [2024-11-26 19:20:13.057024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:56.011 [2024-11-26 19:20:13.057112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-11-26 19:20:13.057141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-11-26 19:20:13.057165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:56.011 [2024-11-26 19:20:13.057179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:56.011 [2024-11-26 19:20:13.057522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-11-26 19:20:13.057551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-11-26 19:20:13.057928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.057958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it.
00:29:56.012 [2024-11-26 19:20:13.058271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.058300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-11-26 19:20:13.058589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.058617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-11-26 19:20:13.058978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.059007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-11-26 19:20:13.059357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.059386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-11-26 19:20:13.059762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.059791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-11-26 19:20:13.060086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.060117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-11-26 19:20:13.060491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.060522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-11-26 19:20:13.060888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.060917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-11-26 19:20:13.061274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.061304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-11-26 19:20:13.061584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.061612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 
00:29:56.012 [2024-11-26 19:20:13.061987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.062017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-11-26 19:20:13.062290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.062319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-11-26 19:20:13.062657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.062686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-11-26 19:20:13.063056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.063084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-11-26 19:20:13.063434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.063463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-11-26 19:20:13.063713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.063745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-11-26 19:20:13.064038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.064066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-11-26 19:20:13.064443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.064473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-11-26 19:20:13.064752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.064795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-11-26 19:20:13.065142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.065184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 
00:29:56.012 [2024-11-26 19:20:13.065530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.065559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-11-26 19:20:13.065915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.065945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-11-26 19:20:13.066309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.066339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-11-26 19:20:13.066713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.066741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-11-26 19:20:13.067108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.067139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-11-26 19:20:13.067527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.067557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-11-26 19:20:13.067924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.067953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-11-26 19:20:13.068324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.068354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-11-26 19:20:13.068615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.068643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-11-26 19:20:13.069005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.069036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 
00:29:56.012 [2024-11-26 19:20:13.069425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.069457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-11-26 19:20:13.069830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.069859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-11-26 19:20:13.070130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.070176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-11-26 19:20:13.070522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.070551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-11-26 19:20:13.070919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.070949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-11-26 19:20:13.071294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.071325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-11-26 19:20:13.071584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.071613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-11-26 19:20:13.071871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-11-26 19:20:13.071900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.013 [2024-11-26 19:20:13.072124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-11-26 19:20:13.072154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-11-26 19:20:13.072531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-11-26 19:20:13.072560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 
00:29:56.013 [2024-11-26 19:20:13.072923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-11-26 19:20:13.072952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-11-26 19:20:13.073311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-11-26 19:20:13.073340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-11-26 19:20:13.073776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-11-26 19:20:13.073806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-11-26 19:20:13.074056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-11-26 19:20:13.074084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-11-26 19:20:13.074354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-11-26 19:20:13.074384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-11-26 19:20:13.074622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-11-26 19:20:13.074661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-11-26 19:20:13.075009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-11-26 19:20:13.075039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-11-26 19:20:13.075422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-11-26 19:20:13.075452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-11-26 19:20:13.075806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-11-26 19:20:13.075835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-11-26 19:20:13.076206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-11-26 19:20:13.076236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 
00:29:56.013 [2024-11-26 19:20:13.076461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-11-26 19:20:13.076489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-11-26 19:20:13.076825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-11-26 19:20:13.076855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-11-26 19:20:13.077216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-11-26 19:20:13.077247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-11-26 19:20:13.077612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-11-26 19:20:13.077641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-11-26 19:20:13.077753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-11-26 19:20:13.077781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-11-26 19:20:13.078141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-11-26 19:20:13.078183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-11-26 19:20:13.078540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-11-26 19:20:13.078570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-11-26 19:20:13.078937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-11-26 19:20:13.078966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-11-26 19:20:13.079337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-11-26 19:20:13.079367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-11-26 19:20:13.079745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-11-26 19:20:13.079775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 
00:29:56.013 [2024-11-26 19:20:13.080184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 19:20:13.080215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
00:29:56.013 [2024-11-26 19:20:13.080581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.013 [2024-11-26 19:20:13.080610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420
00:29:56.013 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats for every reconnect attempt from 19:20:13.080 through 19:20:13.156 ...]
00:29:56.019 [2024-11-26 19:20:13.156363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.019 [2024-11-26 19:20:13.156392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420
00:29:56.019 qpair failed and we were unable to recover it.
00:29:56.019 [2024-11-26 19:20:13.156637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.019 [2024-11-26 19:20:13.156666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.019 qpair failed and we were unable to recover it. 00:29:56.019 [2024-11-26 19:20:13.157028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.019 [2024-11-26 19:20:13.157057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.019 qpair failed and we were unable to recover it. 00:29:56.019 [2024-11-26 19:20:13.157412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.019 [2024-11-26 19:20:13.157442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.019 qpair failed and we were unable to recover it. 00:29:56.019 [2024-11-26 19:20:13.157683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.019 [2024-11-26 19:20:13.157710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.019 qpair failed and we were unable to recover it. 00:29:56.019 [2024-11-26 19:20:13.158072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.019 [2024-11-26 19:20:13.158100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.019 qpair failed and we were unable to recover it. 00:29:56.019 [2024-11-26 19:20:13.158316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.019 [2024-11-26 19:20:13.158345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.019 qpair failed and we were unable to recover it. 00:29:56.019 [2024-11-26 19:20:13.158728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.019 [2024-11-26 19:20:13.158756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.019 qpair failed and we were unable to recover it. 00:29:56.019 [2024-11-26 19:20:13.159009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.019 [2024-11-26 19:20:13.159039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.019 qpair failed and we were unable to recover it. 00:29:56.019 [2024-11-26 19:20:13.159389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.019 [2024-11-26 19:20:13.159420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.019 qpair failed and we were unable to recover it. 00:29:56.019 [2024-11-26 19:20:13.159790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.019 [2024-11-26 19:20:13.159818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.019 qpair failed and we were unable to recover it. 
00:29:56.019 [2024-11-26 19:20:13.160195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.019 [2024-11-26 19:20:13.160225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.019 qpair failed and we were unable to recover it. 00:29:56.019 [2024-11-26 19:20:13.160583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.019 [2024-11-26 19:20:13.160611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.019 qpair failed and we were unable to recover it. 00:29:56.019 [2024-11-26 19:20:13.160977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.019 [2024-11-26 19:20:13.161006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.019 qpair failed and we were unable to recover it. 00:29:56.019 [2024-11-26 19:20:13.161296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.019 [2024-11-26 19:20:13.161336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.019 qpair failed and we were unable to recover it. 00:29:56.019 [2024-11-26 19:20:13.161588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.019 [2024-11-26 19:20:13.161617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.019 qpair failed and we were unable to recover it. 00:29:56.019 [2024-11-26 19:20:13.161835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.019 [2024-11-26 19:20:13.161863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.019 qpair failed and we were unable to recover it. 00:29:56.019 [2024-11-26 19:20:13.162109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.162136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.020 qpair failed and we were unable to recover it. 00:29:56.020 [2024-11-26 19:20:13.162569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.162598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.020 qpair failed and we were unable to recover it. 00:29:56.020 [2024-11-26 19:20:13.162963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.162991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.020 qpair failed and we were unable to recover it. 00:29:56.020 [2024-11-26 19:20:13.163338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.163368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.020 qpair failed and we were unable to recover it. 
00:29:56.020 [2024-11-26 19:20:13.163638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.163666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.020 qpair failed and we were unable to recover it. 00:29:56.020 [2024-11-26 19:20:13.163896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.163928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.020 qpair failed and we were unable to recover it. 00:29:56.020 [2024-11-26 19:20:13.164182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.164213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.020 qpair failed and we were unable to recover it. 00:29:56.020 [2024-11-26 19:20:13.164558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.164586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.020 qpair failed and we were unable to recover it. 00:29:56.020 [2024-11-26 19:20:13.164951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.164978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.020 qpair failed and we were unable to recover it. 00:29:56.020 [2024-11-26 19:20:13.165273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.165302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.020 qpair failed and we were unable to recover it. 00:29:56.020 [2024-11-26 19:20:13.165645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.165674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.020 qpair failed and we were unable to recover it. 00:29:56.020 [2024-11-26 19:20:13.165902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.165930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.020 qpair failed and we were unable to recover it. 00:29:56.020 [2024-11-26 19:20:13.166309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.166339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.020 qpair failed and we were unable to recover it. 00:29:56.020 [2024-11-26 19:20:13.166707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.166735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.020 qpair failed and we were unable to recover it. 
00:29:56.020 [2024-11-26 19:20:13.167112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.167139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.020 qpair failed and we were unable to recover it. 00:29:56.020 [2024-11-26 19:20:13.167536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.167565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.020 qpair failed and we were unable to recover it. 00:29:56.020 [2024-11-26 19:20:13.167935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.167964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.020 qpair failed and we were unable to recover it. 00:29:56.020 [2024-11-26 19:20:13.168327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.168359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.020 qpair failed and we were unable to recover it. 00:29:56.020 [2024-11-26 19:20:13.168737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.168766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.020 qpair failed and we were unable to recover it. 00:29:56.020 [2024-11-26 19:20:13.169131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.169172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.020 qpair failed and we were unable to recover it. 00:29:56.020 [2024-11-26 19:20:13.169530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.169558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.020 qpair failed and we were unable to recover it. 00:29:56.020 [2024-11-26 19:20:13.169788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.169815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.020 qpair failed and we were unable to recover it. 00:29:56.020 [2024-11-26 19:20:13.170169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.170200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.020 qpair failed and we were unable to recover it. 00:29:56.020 [2024-11-26 19:20:13.170515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.170544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.020 qpair failed and we were unable to recover it. 
00:29:56.020 [2024-11-26 19:20:13.170784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.170811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.020 qpair failed and we were unable to recover it. 00:29:56.020 [2024-11-26 19:20:13.170934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.170961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.020 qpair failed and we were unable to recover it. 00:29:56.020 [2024-11-26 19:20:13.171353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.171383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.020 qpair failed and we were unable to recover it. 00:29:56.020 [2024-11-26 19:20:13.171637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.171666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.020 qpair failed and we were unable to recover it. 00:29:56.020 [2024-11-26 19:20:13.171894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.171925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.020 qpair failed and we were unable to recover it. 00:29:56.020 [2024-11-26 19:20:13.172290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.172320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.020 qpair failed and we were unable to recover it. 00:29:56.020 [2024-11-26 19:20:13.172546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.172574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.020 qpair failed and we were unable to recover it. 00:29:56.020 [2024-11-26 19:20:13.172721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.172750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.020 qpair failed and we were unable to recover it. 00:29:56.020 [2024-11-26 19:20:13.172988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.173016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.020 qpair failed and we were unable to recover it. 00:29:56.020 [2024-11-26 19:20:13.173233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.173264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.020 qpair failed and we were unable to recover it. 
00:29:56.020 [2024-11-26 19:20:13.173641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.173669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.020 qpair failed and we were unable to recover it. 00:29:56.020 [2024-11-26 19:20:13.173899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.173931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.020 qpair failed and we were unable to recover it. 00:29:56.020 [2024-11-26 19:20:13.174296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.174326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.020 qpair failed and we were unable to recover it. 00:29:56.020 [2024-11-26 19:20:13.174665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.174694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.020 qpair failed and we were unable to recover it. 00:29:56.020 [2024-11-26 19:20:13.175071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.020 [2024-11-26 19:20:13.175099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 00:29:56.021 [2024-11-26 19:20:13.175375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.021 [2024-11-26 19:20:13.175407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 00:29:56.021 [2024-11-26 19:20:13.175773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.021 [2024-11-26 19:20:13.175802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 00:29:56.021 [2024-11-26 19:20:13.176184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.021 [2024-11-26 19:20:13.176214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 00:29:56.021 [2024-11-26 19:20:13.176572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.021 [2024-11-26 19:20:13.176600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 00:29:56.021 [2024-11-26 19:20:13.177043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.021 [2024-11-26 19:20:13.177072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 
00:29:56.021 [2024-11-26 19:20:13.177409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.021 [2024-11-26 19:20:13.177445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 00:29:56.021 [2024-11-26 19:20:13.177780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.021 [2024-11-26 19:20:13.177809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 00:29:56.021 [2024-11-26 19:20:13.178018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.021 [2024-11-26 19:20:13.178047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 00:29:56.021 [2024-11-26 19:20:13.178274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.021 [2024-11-26 19:20:13.178303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 00:29:56.021 [2024-11-26 19:20:13.178686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.021 [2024-11-26 19:20:13.178715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 00:29:56.021 [2024-11-26 19:20:13.179087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.021 [2024-11-26 19:20:13.179115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 00:29:56.021 [2024-11-26 19:20:13.179484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.021 [2024-11-26 19:20:13.179513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 00:29:56.021 [2024-11-26 19:20:13.179744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.021 [2024-11-26 19:20:13.179772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 00:29:56.021 [2024-11-26 19:20:13.180150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.021 [2024-11-26 19:20:13.180205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 00:29:56.021 [2024-11-26 19:20:13.180413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.021 [2024-11-26 19:20:13.180441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 
00:29:56.021 [2024-11-26 19:20:13.180801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.021 [2024-11-26 19:20:13.180829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 00:29:56.021 [2024-11-26 19:20:13.181200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.021 [2024-11-26 19:20:13.181231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 00:29:56.021 [2024-11-26 19:20:13.181614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.021 [2024-11-26 19:20:13.181642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 00:29:56.021 [2024-11-26 19:20:13.182014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.021 [2024-11-26 19:20:13.182041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 00:29:56.021 [2024-11-26 19:20:13.182319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.021 [2024-11-26 19:20:13.182349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 00:29:56.021 [2024-11-26 19:20:13.182706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.021 [2024-11-26 19:20:13.182734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 00:29:56.021 [2024-11-26 19:20:13.183100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.021 [2024-11-26 19:20:13.183128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 00:29:56.021 [2024-11-26 19:20:13.183367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.021 [2024-11-26 19:20:13.183398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 00:29:56.021 [2024-11-26 19:20:13.183759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.021 [2024-11-26 19:20:13.183788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 00:29:56.021 [2024-11-26 19:20:13.184143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.021 [2024-11-26 19:20:13.184192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 
00:29:56.021 [2024-11-26 19:20:13.184536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.021 [2024-11-26 19:20:13.184565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 00:29:56.021 [2024-11-26 19:20:13.184777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.021 [2024-11-26 19:20:13.184805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 00:29:56.021 [2024-11-26 19:20:13.185191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.021 [2024-11-26 19:20:13.185221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 00:29:56.021 [2024-11-26 19:20:13.185442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.021 [2024-11-26 19:20:13.185470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 00:29:56.021 [2024-11-26 19:20:13.185842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.021 [2024-11-26 19:20:13.185870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 00:29:56.021 [2024-11-26 19:20:13.186092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.021 [2024-11-26 19:20:13.186121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 00:29:56.021 [2024-11-26 19:20:13.186494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.021 [2024-11-26 19:20:13.186524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 00:29:56.021 [2024-11-26 19:20:13.186750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.021 [2024-11-26 19:20:13.186788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 00:29:56.021 [2024-11-26 19:20:13.187136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.021 [2024-11-26 19:20:13.187179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 00:29:56.021 [2024-11-26 19:20:13.187520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.021 [2024-11-26 19:20:13.187549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 
00:29:56.021 [2024-11-26 19:20:13.187919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.021 [2024-11-26 19:20:13.187947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 00:29:56.021 [2024-11-26 19:20:13.188109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.021 [2024-11-26 19:20:13.188137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 00:29:56.021 [2024-11-26 19:20:13.188502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.021 [2024-11-26 19:20:13.188531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.021 qpair failed and we were unable to recover it. 00:29:56.022 [2024-11-26 19:20:13.188758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.022 [2024-11-26 19:20:13.188786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.022 qpair failed and we were unable to recover it. 00:29:56.022 [2024-11-26 19:20:13.189039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.022 [2024-11-26 19:20:13.189067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.022 qpair failed and we were unable to recover it. 00:29:56.022 [2024-11-26 19:20:13.189442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.022 [2024-11-26 19:20:13.189473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.022 qpair failed and we were unable to recover it. 00:29:56.022 [2024-11-26 19:20:13.189814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.022 [2024-11-26 19:20:13.189842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.022 qpair failed and we were unable to recover it. 00:29:56.022 [2024-11-26 19:20:13.190064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.022 [2024-11-26 19:20:13.190092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.022 qpair failed and we were unable to recover it. 00:29:56.022 [2024-11-26 19:20:13.190329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.022 [2024-11-26 19:20:13.190360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.022 qpair failed and we were unable to recover it. 00:29:56.022 [2024-11-26 19:20:13.190728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.022 [2024-11-26 19:20:13.190756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.022 qpair failed and we were unable to recover it. 
00:29:56.022 [2024-11-26 19:20:13.191126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.022 [2024-11-26 19:20:13.191154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.022 qpair failed and we were unable to recover it. 00:29:56.022 [2024-11-26 19:20:13.191539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.022 [2024-11-26 19:20:13.191568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.022 qpair failed and we were unable to recover it. 00:29:56.022 [2024-11-26 19:20:13.191953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.022 [2024-11-26 19:20:13.191981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.022 qpair failed and we were unable to recover it. 00:29:56.022 [2024-11-26 19:20:13.192295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.022 [2024-11-26 19:20:13.192324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.022 qpair failed and we were unable to recover it. 00:29:56.022 [2024-11-26 19:20:13.192681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.022 [2024-11-26 19:20:13.192709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.022 qpair failed and we were unable to recover it. 00:29:56.022 [2024-11-26 19:20:13.192938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.022 [2024-11-26 19:20:13.192965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.022 qpair failed and we were unable to recover it. 00:29:56.022 [2024-11-26 19:20:13.193336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.022 [2024-11-26 19:20:13.193367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.022 qpair failed and we were unable to recover it. 00:29:56.022 [2024-11-26 19:20:13.193743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.022 [2024-11-26 19:20:13.193772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 00:29:56.294 [2024-11-26 19:20:13.194140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.294 [2024-11-26 19:20:13.194193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.294 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 19:20:13.194403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 19:20:13.194432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 
00:29:56.295 [2024-11-26 19:20:13.194649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 19:20:13.194677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 19:20:13.194918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 19:20:13.194947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 19:20:13.195320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 19:20:13.195349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 19:20:13.195721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 19:20:13.195749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 19:20:13.196119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 19:20:13.196153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 19:20:13.196415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 19:20:13.196444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 19:20:13.196800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 19:20:13.196830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 19:20:13.197183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 19:20:13.197213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 19:20:13.197572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 19:20:13.197601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 19:20:13.197813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 19:20:13.197843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 
00:29:56.295 [2024-11-26 19:20:13.198196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 19:20:13.198226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 19:20:13.198632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 19:20:13.198660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 19:20:13.199028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 19:20:13.199056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 19:20:13.199410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 19:20:13.199441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 19:20:13.199806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 19:20:13.199834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 19:20:13.200196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 19:20:13.200226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 19:20:13.200497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 19:20:13.200528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 19:20:13.200888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 19:20:13.200916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 19:20:13.201131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 19:20:13.201172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 00:29:56.295 [2024-11-26 19:20:13.201548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 19:20:13.201576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it. 
00:29:56.295 [2024-11-26 19:20:13.201943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.295 [2024-11-26 19:20:13.201971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.295 qpair failed and we were unable to recover it.
[log condensed: the same three-message sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it) repeats continuously, roughly 200 further occurrences between 19:20:13.202325 and 19:20:13.277494; only the first and last occurrences are shown here.]
00:29:56.298 [2024-11-26 19:20:13.277714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-11-26 19:20:13.277742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it.
00:29:56.299 [2024-11-26 19:20:13.277981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.278010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.278385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.278415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.278787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.278815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.279186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.279215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.279463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.279491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.279774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.279802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.280147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.280188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.280398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.280427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.280795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.280823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.281199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.281230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 
00:29:56.299 [2024-11-26 19:20:13.281623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.281652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.282003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.282031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.282398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.282428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.282738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.282766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.283147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.283197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.283422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.283453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.283803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.283832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.284099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.284127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.284402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.284432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.284793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.284822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 
00:29:56.299 [2024-11-26 19:20:13.285185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.285215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.285529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.285557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.285911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.285940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.286286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.286316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.286687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.286715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.286946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.286974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.287334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.287364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.287606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.287634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.288012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.288041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.288391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.288421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 
00:29:56.299 [2024-11-26 19:20:13.288770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.288798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.289178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.289208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.289540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.289574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.289917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.289945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.290301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.290332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.290724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.290753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.290995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.291025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.291391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.291421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.291804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.291831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.292207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.292237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 
00:29:56.299 [2024-11-26 19:20:13.292627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.292655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.293021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.293050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.293409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.293439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.293649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.293677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.294046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.294074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.294445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.294476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.294726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.294754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.295037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.295067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.295431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.295461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.295823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.295851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 
00:29:56.299 [2024-11-26 19:20:13.296221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.296251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.296630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.296658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.297030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.297058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.297419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.297449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.297828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.297856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.298112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.298144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.298402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.298431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.298658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.298686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.299053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.299081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-11-26 19:20:13.299458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.299496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 
00:29:56.299 [2024-11-26 19:20:13.299742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-11-26 19:20:13.299771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.300002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.300030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.300396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.300425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.300790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.300819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.301201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.301231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.301439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.301467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.301829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.301859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.302223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.302253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.302638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.302666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.302923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.302955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 
00:29:56.300 [2024-11-26 19:20:13.303198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.303229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.303455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.303483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.303836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.303863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.304235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.304265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.304515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.304547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.304926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.304954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.305320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.305350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.305718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.305745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.306117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.306146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.306548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.306577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 
00:29:56.300 [2024-11-26 19:20:13.306819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.306849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.307197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.307228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.307621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.307649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.308016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.308044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.308403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.308432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.308650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.308678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.309033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.309068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.309420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.309450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.309708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.309737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.310080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.310108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 
00:29:56.300 [2024-11-26 19:20:13.310507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.310538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.310920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.310948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.311278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.311309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.311665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.311693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.311905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.311935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.312149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.312203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.312582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.312611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.312970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.312999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.313220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.313249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.313610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.313639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 
00:29:56.300 [2024-11-26 19:20:13.314031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.314060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.314281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.314311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.314687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.314716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.315088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.315116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.315531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.315561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.315923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.315951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.316340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.316370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.316746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.316774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.317015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.317046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.317404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.317433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 
00:29:56.300 [2024-11-26 19:20:13.317659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.317687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.318053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.318082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.318464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.318495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-11-26 19:20:13.318590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-11-26 19:20:13.318618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 Read completed with error (sct=0, sc=8) 00:29:56.300 starting I/O failed 00:29:56.300 Write completed with error (sct=0, sc=8) 00:29:56.300 starting I/O failed 00:29:56.300 Write completed with error (sct=0, sc=8) 00:29:56.300 starting I/O failed 00:29:56.300 Read completed with error (sct=0, sc=8) 00:29:56.300 starting I/O failed 00:29:56.300 Write completed with error (sct=0, sc=8) 00:29:56.300 starting I/O failed 00:29:56.300 Write completed with error (sct=0, sc=8) 00:29:56.300 starting I/O failed 00:29:56.300 Read completed with error (sct=0, sc=8) 00:29:56.300 starting I/O failed 00:29:56.300 Read completed with error (sct=0, sc=8) 00:29:56.300 starting I/O failed 00:29:56.300 Write completed with error (sct=0, sc=8) 00:29:56.300 starting I/O failed 00:29:56.300 Write completed with error (sct=0, sc=8) 00:29:56.300 starting I/O failed 00:29:56.300 Read completed with error (sct=0, sc=8) 00:29:56.300 starting I/O failed 00:29:56.300 Read completed with error (sct=0, sc=8) 00:29:56.300 starting I/O failed 00:29:56.300 Write completed with error (sct=0, sc=8) 00:29:56.300 starting I/O failed 00:29:56.301 Read completed with error (sct=0, sc=8) 00:29:56.301 starting I/O failed 00:29:56.301 Write completed with error (sct=0, sc=8) 00:29:56.301 starting I/O failed 00:29:56.301 Write completed with error (sct=0, sc=8) 00:29:56.301 starting I/O failed 00:29:56.301 Write completed with error (sct=0, sc=8) 00:29:56.301 starting I/O failed 00:29:56.301 Read completed with error (sct=0, sc=8) 00:29:56.301 starting I/O failed 00:29:56.301 Read completed with error (sct=0, sc=8) 00:29:56.301 starting I/O failed 00:29:56.301 Write completed with error (sct=0, sc=8) 00:29:56.301 starting I/O failed 00:29:56.301 Write completed with error (sct=0, sc=8) 00:29:56.301 starting I/O failed 00:29:56.301 Read completed with error (sct=0, sc=8) 00:29:56.301 starting I/O failed 00:29:56.301 Write completed with error (sct=0, sc=8) 00:29:56.301 starting I/O failed 00:29:56.301 Read completed with error (sct=0, sc=8) 00:29:56.301 starting I/O failed 00:29:56.301 Read completed with error (sct=0, sc=8) 00:29:56.301 starting I/O failed 00:29:56.301 Write 
00:29:56.301 [2024-11-26 19:20:13.319468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.301 [2024-11-26 19:20:13.319962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.301 [2024-11-26 19:20:13.320026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420
00:29:56.301 qpair failed and we were unable to recover it.
[log condensed: the same three-line failure sequence occurs 47 times in total for tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420, timestamps 19:20:13.319962 through 19:20:13.337645]
00:29:56.301 [2024-11-26 19:20:13.338015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.301 [2024-11-26 19:20:13.338043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.301 qpair failed and we were unable to recover it. 00:29:56.301 [2024-11-26 19:20:13.338396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.301 [2024-11-26 19:20:13.338425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.301 qpair failed and we were unable to recover it. 00:29:56.301 [2024-11-26 19:20:13.338791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.301 [2024-11-26 19:20:13.338820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.301 qpair failed and we were unable to recover it. 00:29:56.301 [2024-11-26 19:20:13.339043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.339071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.339331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.339360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.339638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.339666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.340007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.340035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.340268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.340296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.340652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.340680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.340935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.340967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 
00:29:56.302 [2024-11-26 19:20:13.341334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.341364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.341623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.341656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.342013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.342049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.342269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.342300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.342668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.342696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.342952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.342981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.343337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.343368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.343641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.343670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.344049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.344077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.344447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.344477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 
00:29:56.302 [2024-11-26 19:20:13.344726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.344755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.345056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.345084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.345448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.345478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.345837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.345866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.346232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.346261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.346634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.346663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.347028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.347057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.347411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.347443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.347660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.347690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.347904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.347933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 
00:29:56.302 [2024-11-26 19:20:13.348309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.348339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.348562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.348590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.348837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.348871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.349233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.349263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.349617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.349645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.349883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.349914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.350295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.350326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.350702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.350730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.350968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.350996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.351392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.351424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 
00:29:56.302 [2024-11-26 19:20:13.351776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.351805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.352171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.352201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.352557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.352588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.352963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.352991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.353357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.353386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.353767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.353796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.354142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.354194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.354536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.354566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.354931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.354959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.355197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.355227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 
00:29:56.302 [2024-11-26 19:20:13.355594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.355623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.355878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.355906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.356288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.356323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.356683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.356712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.357078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.357107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.357469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.357499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.357710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.357739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.358114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.358144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.358417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.358450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 00:29:56.302 [2024-11-26 19:20:13.358792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.302 [2024-11-26 19:20:13.358821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.302 qpair failed and we were unable to recover it. 
00:29:56.302 [2024-11-26 19:20:13.359182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.359214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.359449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.359482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.359824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.359854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.360104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.360133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.360301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.360331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.360578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.360606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.360956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.360985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.361259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.361289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.361614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.361643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.361983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.362014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 
00:29:56.303 [2024-11-26 19:20:13.362380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.362412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.362783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.362811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.363178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.363207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.363452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.363481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.363855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.363883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.364109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.364138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.364371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.364401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.364787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.364816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.365190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.365220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.365569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.365598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 
00:29:56.303 [2024-11-26 19:20:13.365967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.365996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.366367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.366398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.366614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.366643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.366998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.367026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.367380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.367410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.367656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.367684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.368058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.368087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.368489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.368519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.368882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.368912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.369312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.369341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 
00:29:56.303 [2024-11-26 19:20:13.369600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.369632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.369990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.370019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.370385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.370422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.370766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.370795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.371173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.371206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.371521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.371550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.371918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.371946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.372296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.372326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.372707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.372734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.372970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.372998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 
00:29:56.303 [2024-11-26 19:20:13.373247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.373281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.373516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.373546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.373929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.373957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.374330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.374359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.374731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.374760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.375105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.375133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.375492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.375523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.303 qpair failed and we were unable to recover it. 00:29:56.303 [2024-11-26 19:20:13.375886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.303 [2024-11-26 19:20:13.375915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 00:29:56.304 [2024-11-26 19:20:13.376137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 19:20:13.376172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 00:29:56.304 [2024-11-26 19:20:13.376496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 19:20:13.376525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 
00:29:56.304 [2024-11-26 19:20:13.376880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 19:20:13.376908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 00:29:56.304 [2024-11-26 19:20:13.377283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 19:20:13.377314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 00:29:56.304 [2024-11-26 19:20:13.377677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 19:20:13.377705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 00:29:56.304 [2024-11-26 19:20:13.378067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 19:20:13.378098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 00:29:56.304 [2024-11-26 19:20:13.378469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 19:20:13.378500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 00:29:56.304 [2024-11-26 19:20:13.378767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 19:20:13.378800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 00:29:56.304 [2024-11-26 19:20:13.379172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 19:20:13.379202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 00:29:56.304 [2024-11-26 19:20:13.379570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 19:20:13.379599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 00:29:56.304 [2024-11-26 19:20:13.379955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 19:20:13.379983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 00:29:56.304 [2024-11-26 19:20:13.380373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 19:20:13.380405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 
00:29:56.304 [2024-11-26 19:20:13.380615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 19:20:13.380644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 00:29:56.304 [2024-11-26 19:20:13.380887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 19:20:13.380916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 00:29:56.304 [2024-11-26 19:20:13.381310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 19:20:13.381340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 00:29:56.304 [2024-11-26 19:20:13.381703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 19:20:13.381732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 00:29:56.304 [2024-11-26 19:20:13.382079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 19:20:13.382107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 00:29:56.304 [2024-11-26 19:20:13.382323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 19:20:13.382353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 00:29:56.304 [2024-11-26 19:20:13.382612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 19:20:13.382642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 00:29:56.304 [2024-11-26 19:20:13.382882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 19:20:13.382910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 00:29:56.304 [2024-11-26 19:20:13.383142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 19:20:13.383183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 00:29:56.304 [2024-11-26 19:20:13.383534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.304 [2024-11-26 19:20:13.383563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.304 qpair failed and we were unable to recover it. 
00:29:56.304 [2024-11-26 19:20:13.383933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.304 [2024-11-26 19:20:13.383961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420
00:29:56.304 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111; sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats verbatim for each subsequent reconnect attempt logged between 19:20:13.384 and 19:20:13.459 ...]
00:29:56.308 [2024-11-26 19:20:13.459680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.308 [2024-11-26 19:20:13.459709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420
00:29:56.308 qpair failed and we were unable to recover it.
00:29:56.308 [2024-11-26 19:20:13.459901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.459929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.460248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.460277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.460631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.460659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.461026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.461054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.461413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.461442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.461819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.461848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.462059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.462088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.462337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.462372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.462732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.462760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.463132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.463169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 
00:29:56.308 [2024-11-26 19:20:13.463379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.463408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.463765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.463793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.464048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.464080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.464317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.464348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.464710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.464738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.465101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.465129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.465526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.465555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.465921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.465951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.466195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.466227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.466612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.466648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 
00:29:56.308 [2024-11-26 19:20:13.467022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.467050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.467388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.467417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.467765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.467794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.468155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.468197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.468521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.468550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.468773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.468801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.469152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.469193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.469558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.469586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.469945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.469973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.470193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.470223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 
00:29:56.308 [2024-11-26 19:20:13.470555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.470583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.470959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.470989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.471373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.471403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.471638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.471671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.472058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.472086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.472269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.472297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.472526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.472554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.472976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.473005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.473216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.473269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.473636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.473665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 
00:29:56.308 [2024-11-26 19:20:13.474039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.474067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.474416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.474446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.474669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.474697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.475062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.475090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.475467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.475497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-11-26 19:20:13.475859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-11-26 19:20:13.475888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.476301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.476331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.476699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.476726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.476825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.476852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd608000b90 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.477306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.477418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 
00:29:56.309 [2024-11-26 19:20:13.477829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.477867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.478376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.478412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.478773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.478802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.479114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.479143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.479682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.479789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.480239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.480310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.480691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.480721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.481062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.481090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.481346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.481376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.481608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.481651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 
00:29:56.309 [2024-11-26 19:20:13.481886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.481915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.482321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.482351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.482579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.482609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.482955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.482983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.483334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.483366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.483603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.483631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.483982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.484010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.484278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.484309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.484687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.484715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.484984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.485013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 
00:29:56.309 [2024-11-26 19:20:13.485387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.485417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.485630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.485658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.486020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.486049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.486454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.486485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.486838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.486866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.487232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.487263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.487594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.487623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.487967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.487997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.488333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.488364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.488752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.488780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 
00:29:56.309 [2024-11-26 19:20:13.489170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.489200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.489562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.489592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.489962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.489993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.490340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.490370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.490744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.490772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.490998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.491028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.491281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.491310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.491701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.491730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.491972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.492006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-11-26 19:20:13.492387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-11-26 19:20:13.492418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 
00:29:56.582 [2024-11-26 19:20:13.492843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 19:20:13.492874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 19:20:13.493220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 19:20:13.493251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 19:20:13.493479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 19:20:13.493507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 19:20:13.493848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 19:20:13.493876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 19:20:13.494248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 19:20:13.494279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 19:20:13.494524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 19:20:13.494554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 19:20:13.494935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 19:20:13.494964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 19:20:13.495333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 19:20:13.495364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 19:20:13.495572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 19:20:13.495601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 19:20:13.495960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 19:20:13.495988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 
00:29:56.582 [2024-11-26 19:20:13.496338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 19:20:13.496370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 19:20:13.496595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 19:20:13.496623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 19:20:13.496983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 19:20:13.497012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 19:20:13.497107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 19:20:13.497135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 19:20:13.497552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 19:20:13.497580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 19:20:13.497706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 19:20:13.497739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 19:20:13.498097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 19:20:13.498127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 19:20:13.498526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 19:20:13.498556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 19:20:13.498918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-11-26 19:20:13.498946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-11-26 19:20:13.499314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 19:20:13.499344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 
00:29:56.583 [2024-11-26 19:20:13.499720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 19:20:13.499749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 19:20:13.500113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 19:20:13.500142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 19:20:13.500439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 19:20:13.500468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 19:20:13.500692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 19:20:13.500721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 19:20:13.500993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 19:20:13.501023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 19:20:13.501228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 19:20:13.501258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 19:20:13.501616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 19:20:13.501644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 19:20:13.502024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 19:20:13.502053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 19:20:13.502410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 19:20:13.502440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 19:20:13.502805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 19:20:13.502833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 
00:29:56.583 [2024-11-26 19:20:13.503225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 19:20:13.503255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 19:20:13.503591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 19:20:13.503620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 19:20:13.503981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 19:20:13.504010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 19:20:13.504252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 19:20:13.504283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 19:20:13.504640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 19:20:13.504668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 19:20:13.505032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 19:20:13.505060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 19:20:13.505398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 19:20:13.505428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 19:20:13.505799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 19:20:13.505835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 19:20:13.506202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 19:20:13.506231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 19:20:13.506601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 19:20:13.506630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 
00:29:56.583 [2024-11-26 19:20:13.506998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 19:20:13.507028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 19:20:13.507413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 19:20:13.507442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 19:20:13.507654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 19:20:13.507683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 19:20:13.508044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 19:20:13.508072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 19:20:13.508418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 19:20:13.508447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 19:20:13.508675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 19:20:13.508707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 19:20:13.509062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 19:20:13.509090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 19:20:13.509431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 19:20:13.509462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 19:20:13.509827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 19:20:13.509854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-11-26 19:20:13.510231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-11-26 19:20:13.510261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 
00:29:56.589 [2024-11-26 19:20:13.578823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-11-26 19:20:13.578850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-11-26 19:20:13.579223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-11-26 19:20:13.579253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-11-26 19:20:13.579472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-11-26 19:20:13.579500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-11-26 19:20:13.579852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-11-26 19:20:13.579879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-11-26 19:20:13.580259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-11-26 19:20:13.580289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-11-26 19:20:13.580650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-11-26 19:20:13.580678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-11-26 19:20:13.581044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-11-26 19:20:13.581072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-11-26 19:20:13.581453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-11-26 19:20:13.581484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-11-26 19:20:13.581837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-11-26 19:20:13.581864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-11-26 19:20:13.582213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-11-26 19:20:13.582242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 
00:29:56.589 [2024-11-26 19:20:13.582620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-11-26 19:20:13.582648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-11-26 19:20:13.583010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-11-26 19:20:13.583038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-11-26 19:20:13.583287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-11-26 19:20:13.583316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-11-26 19:20:13.583541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-11-26 19:20:13.583573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-11-26 19:20:13.583919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-11-26 19:20:13.583954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-11-26 19:20:13.584190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-11-26 19:20:13.584222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-11-26 19:20:13.584593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-11-26 19:20:13.584622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-11-26 19:20:13.584986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-11-26 19:20:13.585013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-11-26 19:20:13.585381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-11-26 19:20:13.585410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-11-26 19:20:13.585775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-11-26 19:20:13.585804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 
00:29:56.589 [2024-11-26 19:20:13.586156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-11-26 19:20:13.586209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-11-26 19:20:13.586479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-11-26 19:20:13.586511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-11-26 19:20:13.586850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-11-26 19:20:13.586878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-11-26 19:20:13.587141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-11-26 19:20:13.587184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-11-26 19:20:13.587535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-11-26 19:20:13.587565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-11-26 19:20:13.587919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-11-26 19:20:13.587947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-11-26 19:20:13.588319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-11-26 19:20:13.588349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-11-26 19:20:13.588700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-11-26 19:20:13.588731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-11-26 19:20:13.589094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-11-26 19:20:13.589125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-11-26 19:20:13.589384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-11-26 19:20:13.589414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 
00:29:56.589 [2024-11-26 19:20:13.590025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-11-26 19:20:13.590066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-11-26 19:20:13.590412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-11-26 19:20:13.590448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-11-26 19:20:13.590665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-11-26 19:20:13.590695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-11-26 19:20:13.591075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-11-26 19:20:13.591104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-11-26 19:20:13.591474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-11-26 19:20:13.591504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-11-26 19:20:13.591862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-11-26 19:20:13.591891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-11-26 19:20:13.592144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-11-26 19:20:13.592189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-11-26 19:20:13.592562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-11-26 19:20:13.592591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-11-26 19:20:13.592830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-11-26 19:20:13.592859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-11-26 19:20:13.593070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-11-26 19:20:13.593099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 
00:29:56.590 [2024-11-26 19:20:13.593472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-11-26 19:20:13.593502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-11-26 19:20:13.593866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-11-26 19:20:13.593902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-11-26 19:20:13.594269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-11-26 19:20:13.594301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-11-26 19:20:13.594655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-11-26 19:20:13.594683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-11-26 19:20:13.595046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-11-26 19:20:13.595073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-11-26 19:20:13.595420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-11-26 19:20:13.595449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-11-26 19:20:13.595679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-11-26 19:20:13.595707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-11-26 19:20:13.596063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-11-26 19:20:13.596093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-11-26 19:20:13.596468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-11-26 19:20:13.596499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-11-26 19:20:13.596846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-11-26 19:20:13.596874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 
00:29:56.590 [2024-11-26 19:20:13.597195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-11-26 19:20:13.597224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-11-26 19:20:13.597560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-11-26 19:20:13.597589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-11-26 19:20:13.597977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-11-26 19:20:13.598005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-11-26 19:20:13.598233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-11-26 19:20:13.598263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-11-26 19:20:13.598496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-11-26 19:20:13.598524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-11-26 19:20:13.598900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-11-26 19:20:13.598929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-11-26 19:20:13.599278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-11-26 19:20:13.599307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-11-26 19:20:13.599570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-11-26 19:20:13.599600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-11-26 19:20:13.599944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-11-26 19:20:13.599973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-11-26 19:20:13.600338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-11-26 19:20:13.600368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 
00:29:56.590 [2024-11-26 19:20:13.600709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-11-26 19:20:13.600737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-11-26 19:20:13.600937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-11-26 19:20:13.600966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-11-26 19:20:13.601314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-11-26 19:20:13.601343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-11-26 19:20:13.601715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-11-26 19:20:13.601743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-11-26 19:20:13.602125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-11-26 19:20:13.602153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-11-26 19:20:13.602544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-11-26 19:20:13.602575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-11-26 19:20:13.602796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-11-26 19:20:13.602824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-11-26 19:20:13.603038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-11-26 19:20:13.603066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-11-26 19:20:13.603432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-11-26 19:20:13.603463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-11-26 19:20:13.603820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-11-26 19:20:13.603850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 
00:29:56.591 [2024-11-26 19:20:13.604255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-11-26 19:20:13.604284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-11-26 19:20:13.604649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-11-26 19:20:13.604677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-11-26 19:20:13.605039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-11-26 19:20:13.605066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-11-26 19:20:13.605453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-11-26 19:20:13.605482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-11-26 19:20:13.605842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-11-26 19:20:13.605870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-11-26 19:20:13.606111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-11-26 19:20:13.606139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-11-26 19:20:13.606383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-11-26 19:20:13.606412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-11-26 19:20:13.606734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-11-26 19:20:13.606761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-11-26 19:20:13.607086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-11-26 19:20:13.607114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-11-26 19:20:13.607364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-11-26 19:20:13.607397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 
00:29:56.591 [2024-11-26 19:20:13.607747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-11-26 19:20:13.607774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-11-26 19:20:13.608032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-11-26 19:20:13.608064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-11-26 19:20:13.608301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-11-26 19:20:13.608334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-11-26 19:20:13.608726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-11-26 19:20:13.608755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-11-26 19:20:13.609127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-11-26 19:20:13.609156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-11-26 19:20:13.609526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-11-26 19:20:13.609556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-11-26 19:20:13.609940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-11-26 19:20:13.609970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-11-26 19:20:13.610193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-11-26 19:20:13.610226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-11-26 19:20:13.610446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-11-26 19:20:13.610475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-11-26 19:20:13.610704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-11-26 19:20:13.610736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 
00:29:56.591 [2024-11-26 19:20:13.610979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-11-26 19:20:13.611008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-11-26 19:20:13.611387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-11-26 19:20:13.611417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-11-26 19:20:13.611687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-11-26 19:20:13.611717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-11-26 19:20:13.612002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-11-26 19:20:13.612031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-11-26 19:20:13.612368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-11-26 19:20:13.612398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-11-26 19:20:13.612492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-11-26 19:20:13.612519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-11-26 19:20:13.612886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-11-26 19:20:13.612915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-11-26 19:20:13.613281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-11-26 19:20:13.613310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-11-26 19:20:13.613699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-11-26 19:20:13.613728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-11-26 19:20:13.613956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-11-26 19:20:13.613985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 
00:29:56.591 [2024-11-26 19:20:13.614339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-11-26 19:20:13.614370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-11-26 19:20:13.614737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-11-26 19:20:13.614765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-11-26 19:20:13.614998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-11-26 19:20:13.615027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-11-26 19:20:13.615315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-11-26 19:20:13.615347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-11-26 19:20:13.615722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-11-26 19:20:13.615750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-11-26 19:20:13.616123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-11-26 19:20:13.616152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-11-26 19:20:13.616541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-11-26 19:20:13.616570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-11-26 19:20:13.616934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-11-26 19:20:13.616962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-11-26 19:20:13.617328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-11-26 19:20:13.617358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-11-26 19:20:13.617738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-11-26 19:20:13.617774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 
00:29:56.592 [2024-11-26 19:20:13.618009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-11-26 19:20:13.618037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-11-26 19:20:13.618355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-11-26 19:20:13.618385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-11-26 19:20:13.618750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-11-26 19:20:13.618781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-11-26 19:20:13.619128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-11-26 19:20:13.619156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-11-26 19:20:13.619380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-11-26 19:20:13.619409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-11-26 19:20:13.619792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-11-26 19:20:13.619822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-11-26 19:20:13.620177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-11-26 19:20:13.620207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-11-26 19:20:13.620597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-11-26 19:20:13.620625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-11-26 19:20:13.620996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-11-26 19:20:13.621024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-11-26 19:20:13.621376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-11-26 19:20:13.621405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 
00:29:56.592 [2024-11-26 19:20:13.621778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-11-26 19:20:13.621806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-11-26 19:20:13.622021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-11-26 19:20:13.622050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-11-26 19:20:13.622414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-11-26 19:20:13.622443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-11-26 19:20:13.622778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-11-26 19:20:13.622808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-11-26 19:20:13.623168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-11-26 19:20:13.623198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-11-26 19:20:13.623565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-11-26 19:20:13.623593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-11-26 19:20:13.623695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-11-26 19:20:13.623723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-11-26 19:20:13.624100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-11-26 19:20:13.624128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-11-26 19:20:13.624475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-11-26 19:20:13.624504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-11-26 19:20:13.624885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-11-26 19:20:13.624913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 
00:29:56.592 [2024-11-26 19:20:13.625269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-11-26 19:20:13.625299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-11-26 19:20:13.625671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-11-26 19:20:13.625700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-11-26 19:20:13.626078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-11-26 19:20:13.626108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-11-26 19:20:13.626569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-11-26 19:20:13.626601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-11-26 19:20:13.626946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-11-26 19:20:13.626976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-11-26 19:20:13.627349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-11-26 19:20:13.627378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-11-26 19:20:13.627756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-11-26 19:20:13.627791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-11-26 19:20:13.627999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-11-26 19:20:13.628028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-11-26 19:20:13.628275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.593 [2024-11-26 19:20:13.628305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.593 qpair failed and we were unable to recover it. 00:29:56.593 [2024-11-26 19:20:13.628516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.593 [2024-11-26 19:20:13.628544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.593 qpair failed and we were unable to recover it. 
00:29:56.593 [2024-11-26 19:20:13.628833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.593 [2024-11-26 19:20:13.628862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420
00:29:56.593 qpair failed and we were unable to recover it.
00:29:56.593 [... the three-line error above repeats without variation from 19:20:13.628833 through 19:20:13.705392 (on the order of 200 consecutive connect() attempts); only the timestamps advance. Every attempt to reach tqpair=0x13780c0 at 10.0.0.2:4420 fails with errno = 111 (ECONNREFUSED), and each time the qpair fails and cannot be recovered. ...]
00:29:56.598 [2024-11-26 19:20:13.705362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.598 [2024-11-26 19:20:13.705392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420
00:29:56.598 qpair failed and we were unable to recover it.
00:29:56.598 [2024-11-26 19:20:13.705742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.598 [2024-11-26 19:20:13.705770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.598 qpair failed and we were unable to recover it. 00:29:56.598 [2024-11-26 19:20:13.706139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.598 [2024-11-26 19:20:13.706177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.598 qpair failed and we were unable to recover it. 00:29:56.598 [2024-11-26 19:20:13.706519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.598 [2024-11-26 19:20:13.706547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.598 qpair failed and we were unable to recover it. 00:29:56.598 [2024-11-26 19:20:13.706760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.598 [2024-11-26 19:20:13.706798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.598 qpair failed and we were unable to recover it. 00:29:56.598 [2024-11-26 19:20:13.707171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.598 [2024-11-26 19:20:13.707203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.598 qpair failed and we were unable to recover it. 00:29:56.598 [2024-11-26 19:20:13.707434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.598 [2024-11-26 19:20:13.707463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.598 qpair failed and we were unable to recover it. 00:29:56.598 [2024-11-26 19:20:13.707793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.598 [2024-11-26 19:20:13.707821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.598 qpair failed and we were unable to recover it. 00:29:56.598 [2024-11-26 19:20:13.708043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.598 [2024-11-26 19:20:13.708070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.598 qpair failed and we were unable to recover it. 00:29:56.598 [2024-11-26 19:20:13.708387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.598 [2024-11-26 19:20:13.708417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.598 qpair failed and we were unable to recover it. 00:29:56.598 [2024-11-26 19:20:13.708653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.598 [2024-11-26 19:20:13.708682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.598 qpair failed and we were unable to recover it. 
00:29:56.598 [2024-11-26 19:20:13.709058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.598 [2024-11-26 19:20:13.709086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.598 qpair failed and we were unable to recover it. 00:29:56.598 [2024-11-26 19:20:13.709456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.598 [2024-11-26 19:20:13.709486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.598 qpair failed and we were unable to recover it. 00:29:56.598 [2024-11-26 19:20:13.709834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.598 [2024-11-26 19:20:13.709862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.598 qpair failed and we were unable to recover it. 00:29:56.598 [2024-11-26 19:20:13.710241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.598 [2024-11-26 19:20:13.710270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-11-26 19:20:13.710657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-11-26 19:20:13.710685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-11-26 19:20:13.711054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-11-26 19:20:13.711081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-11-26 19:20:13.711449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-11-26 19:20:13.711478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-11-26 19:20:13.711846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-11-26 19:20:13.711875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-11-26 19:20:13.712237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-11-26 19:20:13.712268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-11-26 19:20:13.712626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-11-26 19:20:13.712655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 
00:29:56.599 [2024-11-26 19:20:13.713009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-11-26 19:20:13.713036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-11-26 19:20:13.713412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-11-26 19:20:13.713441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-11-26 19:20:13.713827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-11-26 19:20:13.713855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-11-26 19:20:13.714127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-11-26 19:20:13.714155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-11-26 19:20:13.714586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-11-26 19:20:13.714614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-11-26 19:20:13.714990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-11-26 19:20:13.715018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-11-26 19:20:13.715276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-11-26 19:20:13.715305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-11-26 19:20:13.715673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-11-26 19:20:13.715700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-11-26 19:20:13.716078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-11-26 19:20:13.716107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-11-26 19:20:13.716498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-11-26 19:20:13.716530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 
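Errno 111 on Linux is ECONNREFUSED: the initiator's connect() to 10.0.0.2:4420 reaches the host, but nothing is accepting on the NVMe/TCP port, so every queue-pair setup attempt fails immediately and the triplet above is logged again. That is consistent with what nvmf_target_disconnect_tc2 exercises: the target side is down (or its listener removed) while the host keeps retrying. A hypothetical one-liner, not part of the test, that could confirm the listener state from the initiator while these triplets are being logged (netcat assumed available):

  # Zero-I/O probe of the target's NVMe/TCP port with a 1-second timeout.
  nc -z -w 1 10.0.0.2 4420 && echo "port 4420: listener up" || echo "port 4420: refused or timed out"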
[... connect()/qpair-failure triplets continue, interleaved with the harness trace lines below ...]
00:29:56.599 19:20:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:56.599 19:20:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:29:56.599 19:20:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:56.599 19:20:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:29:56.599 19:20:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
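The "-- #" lines are bash xtrace output from the test harness: it runs with tracing enabled and a PS4 that embeds the wall-clock time, the test path, and the source file and line of each executed command, and timing_exit start_nvmf_tgt here appears to close the timed target-startup phase before xtrace_disable / set +x turn tracing back off. A minimal sketch of the mechanism (the PS4 shape below is assumed; SPDK's actual prompt string in autotest_common.sh is more elaborate):

  # Assumed illustration of timestamped, source-located command tracing.
  PS4=' $(date +%T) my_test_suite -- ${BASH_SOURCE##*/}@${LINENO} -- '
  set -x
  (( i == 0 ))   # traced roughly as: 19:20:13 my_test_suite -- script.sh@NN -- # (( i == 0 ))
  set +x         # tracing off again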
00:29:56.599 [2024-11-26 19:20:13.719730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.599 [2024-11-26 19:20:13.719760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420
00:29:56.599 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet repeats back-to-back through 2024-11-26 19:20:13.758711, differing only in timestamps ...]
00:29:56.603 19:20:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:56.603 19:20:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:56.603 19:20:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:56.603 19:20:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
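The trap registered above is the harness's cleanup hook: dump the app's shared memory, then tear the test down, whether the script is interrupted, killed, or exits normally. The same idiom in isolation (dump_state and teardown are hypothetical stand-ins for process_shm and nvmftestfini):

  # run cleanup on Ctrl-C, kill, or normal exit; '|| :' keeps a failed dump from aborting teardown
  trap 'dump_state || :; teardown' SIGINT SIGTERM EXIT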
00:29:56.873 Malloc0
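rpc_cmd is the suite's wrapper around SPDK's scripts/rpc.py, and the lone "Malloc0" above is the RPC's return value: the name of the bdev that was just created. Reproducing that step by hand would look roughly like this (a sketch, assuming a running target app listening on the default RPC socket):

  # create a 64 MB RAM-backed bdev with 512-byte blocks, named Malloc0
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0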
00:29:56.873 19:20:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:56.873 19:20:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:29:56.873 19:20:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:56.873 19:20:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:56.873 [2024-11-26 19:20:13.804543] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
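The "[[ 0 == 0 ]]" line above is the harness asserting the previous RPC's return code, and the NOTICE from tcp.c confirms that nvmf_create_transport actually instantiated the TCP transport inside the target. Run by hand, the step reduces to the sketch below (the extra -o flag seen in the trace is a test-suite option not reproduced here):

  # register the TCP transport with the NVMe-oF target before any listeners can be added
  ./scripts/rpc.py nvmf_create_transport -t tcp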
00:29:56.874 19:20:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:56.874 19:20:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:56.874 19:20:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:56.874 19:20:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
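Next the script creates the subsystem the host side will eventually connect to: NQN nqn.2016-06.io.spdk:cnode1 with serial SPDK00000000000001, open to any host. A hand-run equivalent would look roughly like this (a sketch; the namespace and listener steps that normally follow fall outside this excerpt):

  # -a allows any host NQN to connect; -s sets the controller serial number
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001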
00:29:56.875 [2024-11-26 19:20:13.822641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.875 [2024-11-26 19:20:13.822671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.875 qpair failed and we were unable to recover it. 00:29:56.875 [2024-11-26 19:20:13.823038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.875 [2024-11-26 19:20:13.823067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.875 qpair failed and we were unable to recover it. 00:29:56.875 [2024-11-26 19:20:13.823420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.875 [2024-11-26 19:20:13.823450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.875 qpair failed and we were unable to recover it. 00:29:56.875 [2024-11-26 19:20:13.823829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.875 [2024-11-26 19:20:13.823857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.875 qpair failed and we were unable to recover it. 00:29:56.875 [2024-11-26 19:20:13.824225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.875 [2024-11-26 19:20:13.824255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.875 qpair failed and we were unable to recover it. 00:29:56.875 [2024-11-26 19:20:13.824599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.875 [2024-11-26 19:20:13.824627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.875 qpair failed and we were unable to recover it. 00:29:56.875 [2024-11-26 19:20:13.825001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.875 [2024-11-26 19:20:13.825031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.875 qpair failed and we were unable to recover it. 00:29:56.875 [2024-11-26 19:20:13.825395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.875 [2024-11-26 19:20:13.825425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.875 qpair failed and we were unable to recover it. 00:29:56.875 19:20:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.875 [2024-11-26 19:20:13.825799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.875 [2024-11-26 19:20:13.825828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420 00:29:56.875 qpair failed and we were unable to recover it. 
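errno = 111 in the connect() failures above is ECONNREFUSED on Linux: the listener on 10.0.0.2:4420 has not come up yet (it appears below, after the subsystem is configured), so each host-side connection attempt is refused and the retry loop keeps emitting the same posix.c/nvme_tcp.c pair. A quick way to confirm the errno mapping, assuming a stock Python 3 interpreter on the test node, is:

  # Look up errno 111 by number; on Linux this is ECONNREFUSED.
  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # prints: ECONNREFUSED - Connection refused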
00:29:56.875 19:20:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:56.875 [2024-11-26 19:20:13.826190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.875 [2024-11-26 19:20:13.826220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420
00:29:56.875 qpair failed and we were unable to recover it.
00:29:56.875 19:20:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:56.875 19:20:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:56.876 [2024-11-26 19:20:13.836401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.876 [2024-11-26 19:20:13.836441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420
00:29:56.876 qpair failed and we were unable to recover it.
00:29:56.876 [2024-11-26 19:20:13.836790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.876 [2024-11-26 19:20:13.836820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420
00:29:56.876 qpair failed and we were unable to recover it.
00:29:56.876 19:20:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:56.876 19:20:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:56.876 19:20:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:56.876 19:20:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:56.877 [2024-11-26 19:20:13.844796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.877 [2024-11-26 19:20:13.844826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13780c0 with addr=10.0.0.2, port=4420
00:29:56.877 qpair failed and we were unable to recover it.
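The three rpc_cmd calls traced above (nvmf_create_subsystem at target_disconnect.sh@22, nvmf_subsystem_add_ns at @24, nvmf_subsystem_add_listener at @25) are the standard SPDK sequence for exposing a bdev over NVMe/TCP. A minimal stand-alone sketch of the same setup, assuming a running nvmf_tgt whose TCP transport was already created (nvmf_create_transport -t tcp) and scripts/rpc.py from the SPDK tree:

  # Create the subsystem; -a allows any host NQN, -s sets its serial number.
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  # Attach the Malloc0 bdev to the subsystem as a namespace.
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # Open a TCP listener for the subsystem on 10.0.0.2:4420.
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the last call completes, the target prints the *** NVMe/TCP Target Listening *** notice seen below and TCP connections start being accepted.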
00:29:56.877 [2024-11-26 19:20:13.844960] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:56.877 19:20:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:56.877 19:20:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:56.877 19:20:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:56.877 19:20:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:56.877 [2024-11-26 19:20:13.855903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.877 [2024-11-26 19:20:13.856033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.877 [2024-11-26 19:20:13.856074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.877 [2024-11-26 19:20:13.856092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.877 [2024-11-26 19:20:13.856107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:56.877 [2024-11-26 19:20:13.856148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:56.877 qpair failed and we were unable to recover it.
00:29:56.877 19:20:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:56.877 19:20:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3132901
00:29:56.877 [2024-11-26 19:20:13.865597] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.877 [2024-11-26 19:20:13.865685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.877 [2024-11-26 19:20:13.865711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.877 [2024-11-26 19:20:13.865722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.877 [2024-11-26 19:20:13.865733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:56.877 [2024-11-26 19:20:13.865756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:56.877 qpair failed and we were unable to recover it.
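From this point the failure mode changes: TCP connect() now succeeds (no more errno 111), but the NVMe-oF Fabrics CONNECT for the I/O queue is rejected. In the completion, sct 1 is the command-specific status code type, and for a CONNECT command sc 130 (0x82) reads as Invalid Parameters in the Fabrics command set; that lines up with the target-side "Unknown controller ID 0x1", since after the forced disconnect the controller ID the host names in its I/O-queue CONNECT no longer exists on the target. A small decoder sketch, with status names as I read them from the Fabrics CONNECT definition (verify against your spec revision):

  decode_connect_status() {
    # sct 1 = command-specific; for CONNECT the codes live at 0x80-0x84.
    local sct=$1 sc=$2
    if [ "$sct" -ne 1 ]; then
      echo "sct $sct: not a command-specific status"
      return
    fi
    case "$sc" in
      128) echo "CONNECT: Incompatible Format" ;;
      129) echo "CONNECT: Controller Busy" ;;
      130) echo "CONNECT: Invalid Parameters (e.g. stale/unknown controller ID)" ;;
      131) echo "CONNECT: Restart Discovery" ;;
      132) echo "CONNECT: Invalid Host" ;;
      *)   echo "CONNECT: unrecognized sc $sc" ;;
    esac
  }
  decode_connect_status 1 130   # prints: CONNECT: Invalid Parameters (e.g. stale/unknown controller ID)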
00:29:57.141 [2024-11-26 19:20:14.196561] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.141 [2024-11-26 19:20:14.196628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.141 [2024-11-26 19:20:14.196645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.141 [2024-11-26 19:20:14.196652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.141 [2024-11-26 19:20:14.196658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.141 [2024-11-26 19:20:14.196675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.141 qpair failed and we were unable to recover it.
00:29:57.141 [2024-11-26 19:20:14.206577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.141 [2024-11-26 19:20:14.206643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.141 [2024-11-26 19:20:14.206661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.141 [2024-11-26 19:20:14.206668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.142 [2024-11-26 19:20:14.206674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.142 [2024-11-26 19:20:14.206690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.142 [2024-11-26 19:20:14.216527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.142 [2024-11-26 19:20:14.216600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.142 [2024-11-26 19:20:14.216617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.142 [2024-11-26 19:20:14.216624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.142 [2024-11-26 19:20:14.216631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.142 [2024-11-26 19:20:14.216647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.142 [2024-11-26 19:20:14.226629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.142 [2024-11-26 19:20:14.226694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.142 [2024-11-26 19:20:14.226711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.142 [2024-11-26 19:20:14.226718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.142 [2024-11-26 19:20:14.226725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.142 [2024-11-26 19:20:14.226741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.142 qpair failed and we were unable to recover it. 
00:29:57.142 [2024-11-26 19:20:14.236650] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.142 [2024-11-26 19:20:14.236719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.142 [2024-11-26 19:20:14.236736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.142 [2024-11-26 19:20:14.236743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.142 [2024-11-26 19:20:14.236749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.142 [2024-11-26 19:20:14.236765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.142 [2024-11-26 19:20:14.246663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.142 [2024-11-26 19:20:14.246729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.142 [2024-11-26 19:20:14.246746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.142 [2024-11-26 19:20:14.246753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.142 [2024-11-26 19:20:14.246760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.142 [2024-11-26 19:20:14.246776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.142 [2024-11-26 19:20:14.256745] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.142 [2024-11-26 19:20:14.256819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.142 [2024-11-26 19:20:14.256837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.142 [2024-11-26 19:20:14.256844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.142 [2024-11-26 19:20:14.256850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.142 [2024-11-26 19:20:14.256867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.142 qpair failed and we were unable to recover it. 
00:29:57.142 [2024-11-26 19:20:14.266745] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.142 [2024-11-26 19:20:14.266807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.142 [2024-11-26 19:20:14.266824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.142 [2024-11-26 19:20:14.266831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.142 [2024-11-26 19:20:14.266838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.142 [2024-11-26 19:20:14.266854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.142 [2024-11-26 19:20:14.276777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.142 [2024-11-26 19:20:14.276840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.142 [2024-11-26 19:20:14.276862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.142 [2024-11-26 19:20:14.276870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.142 [2024-11-26 19:20:14.276876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.142 [2024-11-26 19:20:14.276893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.142 [2024-11-26 19:20:14.286812] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.142 [2024-11-26 19:20:14.286879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.142 [2024-11-26 19:20:14.286896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.142 [2024-11-26 19:20:14.286903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.142 [2024-11-26 19:20:14.286909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.142 [2024-11-26 19:20:14.286926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.142 qpair failed and we were unable to recover it. 
00:29:57.142 [2024-11-26 19:20:14.297054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.142 [2024-11-26 19:20:14.297125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.142 [2024-11-26 19:20:14.297142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.142 [2024-11-26 19:20:14.297149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.142 [2024-11-26 19:20:14.297155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.142 [2024-11-26 19:20:14.297180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.142 [2024-11-26 19:20:14.306904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.142 [2024-11-26 19:20:14.306972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.142 [2024-11-26 19:20:14.306989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.142 [2024-11-26 19:20:14.306997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.142 [2024-11-26 19:20:14.307003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.142 [2024-11-26 19:20:14.307020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.142 [2024-11-26 19:20:14.316873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.142 [2024-11-26 19:20:14.316930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.142 [2024-11-26 19:20:14.316947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.142 [2024-11-26 19:20:14.316960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.142 [2024-11-26 19:20:14.316966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.142 [2024-11-26 19:20:14.316983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.142 qpair failed and we were unable to recover it. 
00:29:57.142 [2024-11-26 19:20:14.326929] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.142 [2024-11-26 19:20:14.327002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.142 [2024-11-26 19:20:14.327020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.142 [2024-11-26 19:20:14.327027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.142 [2024-11-26 19:20:14.327033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.142 [2024-11-26 19:20:14.327050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.142 [2024-11-26 19:20:14.336997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.142 [2024-11-26 19:20:14.337070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.142 [2024-11-26 19:20:14.337086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.142 [2024-11-26 19:20:14.337093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.142 [2024-11-26 19:20:14.337099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.142 [2024-11-26 19:20:14.337116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.142 [2024-11-26 19:20:14.346976] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.142 [2024-11-26 19:20:14.347037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.142 [2024-11-26 19:20:14.347058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.142 [2024-11-26 19:20:14.347065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.142 [2024-11-26 19:20:14.347075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.142 [2024-11-26 19:20:14.347092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.142 qpair failed and we were unable to recover it. 
00:29:57.405 [2024-11-26 19:20:14.356997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.405 [2024-11-26 19:20:14.357060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.405 [2024-11-26 19:20:14.357079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.405 [2024-11-26 19:20:14.357087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.405 [2024-11-26 19:20:14.357093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.405 [2024-11-26 19:20:14.357111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.405 qpair failed and we were unable to recover it. 00:29:57.405 [2024-11-26 19:20:14.367025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.405 [2024-11-26 19:20:14.367091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.405 [2024-11-26 19:20:14.367109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.405 [2024-11-26 19:20:14.367116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.405 [2024-11-26 19:20:14.367123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.405 [2024-11-26 19:20:14.367139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.405 qpair failed and we were unable to recover it. 00:29:57.405 [2024-11-26 19:20:14.377098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.405 [2024-11-26 19:20:14.377172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.405 [2024-11-26 19:20:14.377190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.405 [2024-11-26 19:20:14.377198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.405 [2024-11-26 19:20:14.377204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.405 [2024-11-26 19:20:14.377222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.405 qpair failed and we were unable to recover it. 
00:29:57.405 [2024-11-26 19:20:14.387103] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.405 [2024-11-26 19:20:14.387166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.405 [2024-11-26 19:20:14.387182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.405 [2024-11-26 19:20:14.387190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.405 [2024-11-26 19:20:14.387197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.405 [2024-11-26 19:20:14.387213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.405 qpair failed and we were unable to recover it. 00:29:57.405 [2024-11-26 19:20:14.397143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.405 [2024-11-26 19:20:14.397247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.405 [2024-11-26 19:20:14.397264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.405 [2024-11-26 19:20:14.397271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.405 [2024-11-26 19:20:14.397279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.405 [2024-11-26 19:20:14.397296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.405 qpair failed and we were unable to recover it. 00:29:57.405 [2024-11-26 19:20:14.407147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.405 [2024-11-26 19:20:14.407232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.405 [2024-11-26 19:20:14.407249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.405 [2024-11-26 19:20:14.407257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.405 [2024-11-26 19:20:14.407263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.405 [2024-11-26 19:20:14.407280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.405 qpair failed and we were unable to recover it. 
00:29:57.405 [2024-11-26 19:20:14.417228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.405 [2024-11-26 19:20:14.417351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.405 [2024-11-26 19:20:14.417370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.405 [2024-11-26 19:20:14.417378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.405 [2024-11-26 19:20:14.417384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.405 [2024-11-26 19:20:14.417401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.405 qpair failed and we were unable to recover it. 00:29:57.405 [2024-11-26 19:20:14.427185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.405 [2024-11-26 19:20:14.427292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.405 [2024-11-26 19:20:14.427310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.405 [2024-11-26 19:20:14.427317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.405 [2024-11-26 19:20:14.427323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.405 [2024-11-26 19:20:14.427341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.405 qpair failed and we were unable to recover it. 00:29:57.405 [2024-11-26 19:20:14.437245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.405 [2024-11-26 19:20:14.437306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.405 [2024-11-26 19:20:14.437323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.405 [2024-11-26 19:20:14.437331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.405 [2024-11-26 19:20:14.437337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.405 [2024-11-26 19:20:14.437354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.405 qpair failed and we were unable to recover it. 
00:29:57.405 [2024-11-26 19:20:14.447257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.405 [2024-11-26 19:20:14.447328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.405 [2024-11-26 19:20:14.447347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.405 [2024-11-26 19:20:14.447360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.405 [2024-11-26 19:20:14.447367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.405 [2024-11-26 19:20:14.447385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.405 qpair failed and we were unable to recover it. 00:29:57.405 [2024-11-26 19:20:14.457350] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.405 [2024-11-26 19:20:14.457430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.405 [2024-11-26 19:20:14.457447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.405 [2024-11-26 19:20:14.457454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.405 [2024-11-26 19:20:14.457460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.406 [2024-11-26 19:20:14.457477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.406 qpair failed and we were unable to recover it. 00:29:57.406 [2024-11-26 19:20:14.467312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.406 [2024-11-26 19:20:14.467382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.406 [2024-11-26 19:20:14.467398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.406 [2024-11-26 19:20:14.467405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.406 [2024-11-26 19:20:14.467411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.406 [2024-11-26 19:20:14.467428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.406 qpair failed and we were unable to recover it. 
00:29:57.406 [2024-11-26 19:20:14.477383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.406 [2024-11-26 19:20:14.477450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.406 [2024-11-26 19:20:14.477467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.406 [2024-11-26 19:20:14.477474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.406 [2024-11-26 19:20:14.477481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.406 [2024-11-26 19:20:14.477497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.406 qpair failed and we were unable to recover it. 00:29:57.406 [2024-11-26 19:20:14.487434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.406 [2024-11-26 19:20:14.487503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.406 [2024-11-26 19:20:14.487520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.406 [2024-11-26 19:20:14.487527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.406 [2024-11-26 19:20:14.487533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.406 [2024-11-26 19:20:14.487550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.406 qpair failed and we were unable to recover it. 00:29:57.406 [2024-11-26 19:20:14.497452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.406 [2024-11-26 19:20:14.497528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.406 [2024-11-26 19:20:14.497546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.406 [2024-11-26 19:20:14.497553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.406 [2024-11-26 19:20:14.497559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.406 [2024-11-26 19:20:14.497575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.406 qpair failed and we were unable to recover it. 
00:29:57.406 [2024-11-26 19:20:14.507430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.406 [2024-11-26 19:20:14.507498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.406 [2024-11-26 19:20:14.507519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.406 [2024-11-26 19:20:14.507527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.406 [2024-11-26 19:20:14.507538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.406 [2024-11-26 19:20:14.507556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.406 qpair failed and we were unable to recover it. 00:29:57.406 [2024-11-26 19:20:14.517477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.406 [2024-11-26 19:20:14.517541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.406 [2024-11-26 19:20:14.517559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.406 [2024-11-26 19:20:14.517566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.406 [2024-11-26 19:20:14.517573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.406 [2024-11-26 19:20:14.517589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.406 qpair failed and we were unable to recover it. 00:29:57.406 [2024-11-26 19:20:14.527552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.406 [2024-11-26 19:20:14.527619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.406 [2024-11-26 19:20:14.527637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.406 [2024-11-26 19:20:14.527644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.406 [2024-11-26 19:20:14.527650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.406 [2024-11-26 19:20:14.527668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.406 qpair failed and we were unable to recover it. 
00:29:57.406 [2024-11-26 19:20:14.537562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.406 [2024-11-26 19:20:14.537682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.406 [2024-11-26 19:20:14.537701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.406 [2024-11-26 19:20:14.537708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.406 [2024-11-26 19:20:14.537715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.406 [2024-11-26 19:20:14.537732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.406 qpair failed and we were unable to recover it. 00:29:57.406 [2024-11-26 19:20:14.547616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.406 [2024-11-26 19:20:14.547682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.406 [2024-11-26 19:20:14.547699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.406 [2024-11-26 19:20:14.547707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.406 [2024-11-26 19:20:14.547713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.406 [2024-11-26 19:20:14.547730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.406 qpair failed and we were unable to recover it. 00:29:57.406 [2024-11-26 19:20:14.557677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.406 [2024-11-26 19:20:14.557798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.406 [2024-11-26 19:20:14.557816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.406 [2024-11-26 19:20:14.557824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.406 [2024-11-26 19:20:14.557830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.406 [2024-11-26 19:20:14.557847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.406 qpair failed and we were unable to recover it. 
00:29:57.406 [2024-11-26 19:20:14.567656] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.406 [2024-11-26 19:20:14.567725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.406 [2024-11-26 19:20:14.567744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.406 [2024-11-26 19:20:14.567751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.406 [2024-11-26 19:20:14.567758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.406 [2024-11-26 19:20:14.567775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.406 qpair failed and we were unable to recover it. 00:29:57.406 [2024-11-26 19:20:14.577722] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.406 [2024-11-26 19:20:14.577798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.406 [2024-11-26 19:20:14.577817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.406 [2024-11-26 19:20:14.577830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.406 [2024-11-26 19:20:14.577837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.406 [2024-11-26 19:20:14.577854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.406 qpair failed and we were unable to recover it. 00:29:57.406 [2024-11-26 19:20:14.587715] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.406 [2024-11-26 19:20:14.587780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.406 [2024-11-26 19:20:14.587797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.406 [2024-11-26 19:20:14.587805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.406 [2024-11-26 19:20:14.587811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.406 [2024-11-26 19:20:14.587828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.406 qpair failed and we were unable to recover it. 
00:29:57.406 [2024-11-26 19:20:14.597709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.406 [2024-11-26 19:20:14.597770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.406 [2024-11-26 19:20:14.597792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.406 [2024-11-26 19:20:14.597800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.406 [2024-11-26 19:20:14.597806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.406 [2024-11-26 19:20:14.597824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.406 qpair failed and we were unable to recover it. 00:29:57.406 [2024-11-26 19:20:14.607774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.406 [2024-11-26 19:20:14.607896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.406 [2024-11-26 19:20:14.607914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.406 [2024-11-26 19:20:14.607922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.406 [2024-11-26 19:20:14.607928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.406 [2024-11-26 19:20:14.607945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.406 qpair failed and we were unable to recover it. 00:29:57.668 [2024-11-26 19:20:14.617826] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.668 [2024-11-26 19:20:14.617904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.668 [2024-11-26 19:20:14.617943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.668 [2024-11-26 19:20:14.617952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.668 [2024-11-26 19:20:14.617960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.668 [2024-11-26 19:20:14.617985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.668 qpair failed and we were unable to recover it. 
00:29:57.668 [2024-11-26 19:20:14.627821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.668 [2024-11-26 19:20:14.627943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.668 [2024-11-26 19:20:14.627982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.668 [2024-11-26 19:20:14.627992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.668 [2024-11-26 19:20:14.627999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.668 [2024-11-26 19:20:14.628023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.668 qpair failed and we were unable to recover it. 00:29:57.668 [2024-11-26 19:20:14.637862] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.668 [2024-11-26 19:20:14.637946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.668 [2024-11-26 19:20:14.637967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.668 [2024-11-26 19:20:14.637975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.668 [2024-11-26 19:20:14.637981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.668 [2024-11-26 19:20:14.637999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.668 qpair failed and we were unable to recover it. 00:29:57.668 [2024-11-26 19:20:14.647768] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.668 [2024-11-26 19:20:14.647834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.668 [2024-11-26 19:20:14.647853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.668 [2024-11-26 19:20:14.647860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.668 [2024-11-26 19:20:14.647866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.668 [2024-11-26 19:20:14.647889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.668 qpair failed and we were unable to recover it. 
00:29:57.668 [2024-11-26 19:20:14.657957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.668 [2024-11-26 19:20:14.658032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.668 [2024-11-26 19:20:14.658049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.668 [2024-11-26 19:20:14.658056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.668 [2024-11-26 19:20:14.658062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.668 [2024-11-26 19:20:14.658080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.668 qpair failed and we were unable to recover it. 00:29:57.668 [2024-11-26 19:20:14.667953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.668 [2024-11-26 19:20:14.668019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.668 [2024-11-26 19:20:14.668038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.668 [2024-11-26 19:20:14.668046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.668 [2024-11-26 19:20:14.668052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.668 [2024-11-26 19:20:14.668070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.668 qpair failed and we were unable to recover it. 00:29:57.668 [2024-11-26 19:20:14.677886] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.668 [2024-11-26 19:20:14.677950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.669 [2024-11-26 19:20:14.677968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.669 [2024-11-26 19:20:14.677976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.669 [2024-11-26 19:20:14.677982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.669 [2024-11-26 19:20:14.677999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.669 qpair failed and we were unable to recover it. 
00:29:57.669 [2024-11-26 19:20:14.688034] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.669 [2024-11-26 19:20:14.688106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.669 [2024-11-26 19:20:14.688125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.669 [2024-11-26 19:20:14.688133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.669 [2024-11-26 19:20:14.688139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.669 [2024-11-26 19:20:14.688157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.669 qpair failed and we were unable to recover it. 00:29:57.669 [2024-11-26 19:20:14.698098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.669 [2024-11-26 19:20:14.698184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.669 [2024-11-26 19:20:14.698202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.669 [2024-11-26 19:20:14.698209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.669 [2024-11-26 19:20:14.698216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.669 [2024-11-26 19:20:14.698233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.669 qpair failed and we were unable to recover it. 00:29:57.669 [2024-11-26 19:20:14.708071] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.669 [2024-11-26 19:20:14.708134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.669 [2024-11-26 19:20:14.708151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.669 [2024-11-26 19:20:14.708173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.669 [2024-11-26 19:20:14.708180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:57.669 [2024-11-26 19:20:14.708197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.669 qpair failed and we were unable to recover it. 
00:29:57.669 [2024-11-26 19:20:14.718129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.669 [2024-11-26 19:20:14.718219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.669 [2024-11-26 19:20:14.718237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.669 [2024-11-26 19:20:14.718245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.669 [2024-11-26 19:20:14.718251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.669 [2024-11-26 19:20:14.718269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.669 qpair failed and we were unable to recover it.
00:29:57.669 [2024-11-26 19:20:14.728146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.669 [2024-11-26 19:20:14.728239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.669 [2024-11-26 19:20:14.728257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.669 [2024-11-26 19:20:14.728265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.669 [2024-11-26 19:20:14.728271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.669 [2024-11-26 19:20:14.728290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.669 qpair failed and we were unable to recover it.
00:29:57.669 [2024-11-26 19:20:14.738199] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.669 [2024-11-26 19:20:14.738275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.669 [2024-11-26 19:20:14.738292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.669 [2024-11-26 19:20:14.738300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.669 [2024-11-26 19:20:14.738306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.669 [2024-11-26 19:20:14.738323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.669 qpair failed and we were unable to recover it.
00:29:57.669 [2024-11-26 19:20:14.748208] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.669 [2024-11-26 19:20:14.748276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.669 [2024-11-26 19:20:14.748293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.669 [2024-11-26 19:20:14.748301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.669 [2024-11-26 19:20:14.748307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.669 [2024-11-26 19:20:14.748330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.669 qpair failed and we were unable to recover it.
00:29:57.669 [2024-11-26 19:20:14.758217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.669 [2024-11-26 19:20:14.758282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.669 [2024-11-26 19:20:14.758302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.669 [2024-11-26 19:20:14.758310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.669 [2024-11-26 19:20:14.758320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.669 [2024-11-26 19:20:14.758339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.669 qpair failed and we were unable to recover it.
00:29:57.669 [2024-11-26 19:20:14.768269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.669 [2024-11-26 19:20:14.768335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.669 [2024-11-26 19:20:14.768354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.669 [2024-11-26 19:20:14.768362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.669 [2024-11-26 19:20:14.768369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.669 [2024-11-26 19:20:14.768386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.669 qpair failed and we were unable to recover it.
00:29:57.669 [2024-11-26 19:20:14.778324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.669 [2024-11-26 19:20:14.778397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.669 [2024-11-26 19:20:14.778415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.669 [2024-11-26 19:20:14.778422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.669 [2024-11-26 19:20:14.778429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.669 [2024-11-26 19:20:14.778446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.669 qpair failed and we were unable to recover it.
00:29:57.669 [2024-11-26 19:20:14.788277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.669 [2024-11-26 19:20:14.788343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.669 [2024-11-26 19:20:14.788360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.669 [2024-11-26 19:20:14.788368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.669 [2024-11-26 19:20:14.788374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.669 [2024-11-26 19:20:14.788391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.669 qpair failed and we were unable to recover it.
00:29:57.669 [2024-11-26 19:20:14.798403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.669 [2024-11-26 19:20:14.798467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.669 [2024-11-26 19:20:14.798484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.669 [2024-11-26 19:20:14.798492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.669 [2024-11-26 19:20:14.798499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.669 [2024-11-26 19:20:14.798517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.669 qpair failed and we were unable to recover it.
00:29:57.669 [2024-11-26 19:20:14.808394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.669 [2024-11-26 19:20:14.808468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.669 [2024-11-26 19:20:14.808486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.669 [2024-11-26 19:20:14.808493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.669 [2024-11-26 19:20:14.808500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.669 [2024-11-26 19:20:14.808516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.669 qpair failed and we were unable to recover it.
00:29:57.669 [2024-11-26 19:20:14.818424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.669 [2024-11-26 19:20:14.818491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.669 [2024-11-26 19:20:14.818509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.669 [2024-11-26 19:20:14.818516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.669 [2024-11-26 19:20:14.818523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.669 [2024-11-26 19:20:14.818540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.669 qpair failed and we were unable to recover it.
00:29:57.669 [2024-11-26 19:20:14.828451] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.669 [2024-11-26 19:20:14.828513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.669 [2024-11-26 19:20:14.828532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.669 [2024-11-26 19:20:14.828539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.669 [2024-11-26 19:20:14.828546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.669 [2024-11-26 19:20:14.828563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.669 qpair failed and we were unable to recover it.
00:29:57.669 [2024-11-26 19:20:14.838470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.669 [2024-11-26 19:20:14.838533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.669 [2024-11-26 19:20:14.838549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.669 [2024-11-26 19:20:14.838563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.669 [2024-11-26 19:20:14.838570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.669 [2024-11-26 19:20:14.838586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.669 qpair failed and we were unable to recover it.
00:29:57.669 [2024-11-26 19:20:14.848484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.669 [2024-11-26 19:20:14.848584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.669 [2024-11-26 19:20:14.848601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.669 [2024-11-26 19:20:14.848608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.669 [2024-11-26 19:20:14.848615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.669 [2024-11-26 19:20:14.848631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.669 qpair failed and we were unable to recover it.
00:29:57.669 [2024-11-26 19:20:14.858570] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.669 [2024-11-26 19:20:14.858634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.669 [2024-11-26 19:20:14.858652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.669 [2024-11-26 19:20:14.858660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.669 [2024-11-26 19:20:14.858666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.670 [2024-11-26 19:20:14.858682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.670 qpair failed and we were unable to recover it.
00:29:57.670 [2024-11-26 19:20:14.868538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.670 [2024-11-26 19:20:14.868596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.670 [2024-11-26 19:20:14.868613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.670 [2024-11-26 19:20:14.868621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.670 [2024-11-26 19:20:14.868627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.670 [2024-11-26 19:20:14.868643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.670 qpair failed and we were unable to recover it.
00:29:57.931 [2024-11-26 19:20:14.878584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.931 [2024-11-26 19:20:14.878652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.931 [2024-11-26 19:20:14.878668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.931 [2024-11-26 19:20:14.878676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.931 [2024-11-26 19:20:14.878683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.931 [2024-11-26 19:20:14.878706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.931 qpair failed and we were unable to recover it.
00:29:57.931 [2024-11-26 19:20:14.888606] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.931 [2024-11-26 19:20:14.888683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.932 [2024-11-26 19:20:14.888700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.932 [2024-11-26 19:20:14.888707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.932 [2024-11-26 19:20:14.888714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.932 [2024-11-26 19:20:14.888730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.932 qpair failed and we were unable to recover it.
00:29:57.932 [2024-11-26 19:20:14.898639] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.932 [2024-11-26 19:20:14.898715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.932 [2024-11-26 19:20:14.898732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.932 [2024-11-26 19:20:14.898740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.932 [2024-11-26 19:20:14.898746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.932 [2024-11-26 19:20:14.898762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.932 qpair failed and we were unable to recover it.
00:29:57.932 [2024-11-26 19:20:14.908671] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.932 [2024-11-26 19:20:14.908739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.932 [2024-11-26 19:20:14.908756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.932 [2024-11-26 19:20:14.908764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.932 [2024-11-26 19:20:14.908770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.932 [2024-11-26 19:20:14.908786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.932 qpair failed and we were unable to recover it.
00:29:57.932 [2024-11-26 19:20:14.918691] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.932 [2024-11-26 19:20:14.918767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.932 [2024-11-26 19:20:14.918786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.932 [2024-11-26 19:20:14.918793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.932 [2024-11-26 19:20:14.918800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.932 [2024-11-26 19:20:14.918817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.932 qpair failed and we were unable to recover it.
00:29:57.932 [2024-11-26 19:20:14.928733] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.932 [2024-11-26 19:20:14.928807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.932 [2024-11-26 19:20:14.928827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.932 [2024-11-26 19:20:14.928835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.932 [2024-11-26 19:20:14.928842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.932 [2024-11-26 19:20:14.928862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.932 qpair failed and we were unable to recover it.
00:29:57.932 [2024-11-26 19:20:14.938741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.932 [2024-11-26 19:20:14.938806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.932 [2024-11-26 19:20:14.938824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.932 [2024-11-26 19:20:14.938832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.932 [2024-11-26 19:20:14.938839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.932 [2024-11-26 19:20:14.938857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.932 qpair failed and we were unable to recover it.
00:29:57.932 [2024-11-26 19:20:14.948766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.932 [2024-11-26 19:20:14.948829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.932 [2024-11-26 19:20:14.948847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.932 [2024-11-26 19:20:14.948856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.932 [2024-11-26 19:20:14.948866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.932 [2024-11-26 19:20:14.948884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.932 qpair failed and we were unable to recover it.
00:29:57.932 [2024-11-26 19:20:14.958826] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.932 [2024-11-26 19:20:14.958922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.932 [2024-11-26 19:20:14.958951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.932 [2024-11-26 19:20:14.958959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.932 [2024-11-26 19:20:14.958966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.932 [2024-11-26 19:20:14.958985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.932 qpair failed and we were unable to recover it.
00:29:57.932 [2024-11-26 19:20:14.968860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.932 [2024-11-26 19:20:14.968932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.932 [2024-11-26 19:20:14.968960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.932 [2024-11-26 19:20:14.968974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.932 [2024-11-26 19:20:14.968981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.932 [2024-11-26 19:20:14.969001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.932 qpair failed and we were unable to recover it.
00:29:57.932 [2024-11-26 19:20:14.978901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.932 [2024-11-26 19:20:14.978977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.932 [2024-11-26 19:20:14.978995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.932 [2024-11-26 19:20:14.979002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.932 [2024-11-26 19:20:14.979008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.932 [2024-11-26 19:20:14.979025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.932 qpair failed and we were unable to recover it.
00:29:57.932 [2024-11-26 19:20:14.988874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.932 [2024-11-26 19:20:14.988943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.932 [2024-11-26 19:20:14.988961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.932 [2024-11-26 19:20:14.988968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.932 [2024-11-26 19:20:14.988975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.932 [2024-11-26 19:20:14.988992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.933 qpair failed and we were unable to recover it.
00:29:57.933 [2024-11-26 19:20:14.998936] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.933 [2024-11-26 19:20:14.998998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.933 [2024-11-26 19:20:14.999017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.933 [2024-11-26 19:20:14.999024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.933 [2024-11-26 19:20:14.999032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.933 [2024-11-26 19:20:14.999049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.933 qpair failed and we were unable to recover it.
00:29:57.933 [2024-11-26 19:20:15.008949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.933 [2024-11-26 19:20:15.009017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.933 [2024-11-26 19:20:15.009035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.933 [2024-11-26 19:20:15.009042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.933 [2024-11-26 19:20:15.009049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.933 [2024-11-26 19:20:15.009077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.933 qpair failed and we were unable to recover it.
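Both numeric codes repeated through these stanzas are negative errno values: the CONNECT poll fails with rc -5 (-EIO, "Input/output error") and the completion path reports -6 (-ENXIO), which the log renders as "No such device or address". A self-contained sketch of that rendering using plain libc only (no SPDK required):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        int cq_rc = -ENXIO;     /* -6, as in "CQ transport error -6" above */
        int connect_rc = -EIO;  /* -5, as in "Connect command failed, rc -5" */

        printf("CQ transport error %d (%s)\n", cq_rc, strerror(-cq_rc));
        printf("connect poll rc %d (%s)\n", connect_rc, strerror(-connect_rc));
        return 0;
    }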
00:29:57.933 [2024-11-26 19:20:15.019064] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.933 [2024-11-26 19:20:15.019140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.933 [2024-11-26 19:20:15.019165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.933 [2024-11-26 19:20:15.019173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.933 [2024-11-26 19:20:15.019179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.933 [2024-11-26 19:20:15.019197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.933 qpair failed and we were unable to recover it.
00:29:57.933 [2024-11-26 19:20:15.029032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.933 [2024-11-26 19:20:15.029112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.933 [2024-11-26 19:20:15.029131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.933 [2024-11-26 19:20:15.029138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.933 [2024-11-26 19:20:15.029144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.933 [2024-11-26 19:20:15.029168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.933 qpair failed and we were unable to recover it.
00:29:57.933 [2024-11-26 19:20:15.039046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.933 [2024-11-26 19:20:15.039106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.933 [2024-11-26 19:20:15.039125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.933 [2024-11-26 19:20:15.039132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.933 [2024-11-26 19:20:15.039139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.933 [2024-11-26 19:20:15.039156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.933 qpair failed and we were unable to recover it.
00:29:57.933 [2024-11-26 19:20:15.049085] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.933 [2024-11-26 19:20:15.049153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.933 [2024-11-26 19:20:15.049178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.933 [2024-11-26 19:20:15.049185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.933 [2024-11-26 19:20:15.049192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.933 [2024-11-26 19:20:15.049209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.933 qpair failed and we were unable to recover it.
00:29:57.933 [2024-11-26 19:20:15.059036] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.933 [2024-11-26 19:20:15.059113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.933 [2024-11-26 19:20:15.059135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.933 [2024-11-26 19:20:15.059143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.933 [2024-11-26 19:20:15.059150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.933 [2024-11-26 19:20:15.059177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.933 qpair failed and we were unable to recover it.
00:29:57.933 [2024-11-26 19:20:15.069155] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.933 [2024-11-26 19:20:15.069224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.933 [2024-11-26 19:20:15.069243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.933 [2024-11-26 19:20:15.069251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.933 [2024-11-26 19:20:15.069257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.933 [2024-11-26 19:20:15.069275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.933 qpair failed and we were unable to recover it.
00:29:57.933 [2024-11-26 19:20:15.079190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.933 [2024-11-26 19:20:15.079249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.933 [2024-11-26 19:20:15.079266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.933 [2024-11-26 19:20:15.079274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.933 [2024-11-26 19:20:15.079281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.933 [2024-11-26 19:20:15.079298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.933 qpair failed and we were unable to recover it.
00:29:57.933 [2024-11-26 19:20:15.089226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.933 [2024-11-26 19:20:15.089291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.933 [2024-11-26 19:20:15.089309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.933 [2024-11-26 19:20:15.089316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.933 [2024-11-26 19:20:15.089323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.933 [2024-11-26 19:20:15.089339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.933 qpair failed and we were unable to recover it.
00:29:57.933 [2024-11-26 19:20:15.099277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.933 [2024-11-26 19:20:15.099355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.933 [2024-11-26 19:20:15.099372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.933 [2024-11-26 19:20:15.099385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.933 [2024-11-26 19:20:15.099392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.933 [2024-11-26 19:20:15.099408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.933 qpair failed and we were unable to recover it.
00:29:57.933 [2024-11-26 19:20:15.109285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.933 [2024-11-26 19:20:15.109346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.933 [2024-11-26 19:20:15.109363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.933 [2024-11-26 19:20:15.109371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.933 [2024-11-26 19:20:15.109378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.933 [2024-11-26 19:20:15.109394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.933 qpair failed and we were unable to recover it.
00:29:57.934 [2024-11-26 19:20:15.119301] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.934 [2024-11-26 19:20:15.119360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.934 [2024-11-26 19:20:15.119377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.934 [2024-11-26 19:20:15.119385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.934 [2024-11-26 19:20:15.119392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.934 [2024-11-26 19:20:15.119409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.934 qpair failed and we were unable to recover it.
00:29:57.934 [2024-11-26 19:20:15.129367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.934 [2024-11-26 19:20:15.129446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.934 [2024-11-26 19:20:15.129464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.934 [2024-11-26 19:20:15.129472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.934 [2024-11-26 19:20:15.129478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.934 [2024-11-26 19:20:15.129495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.934 qpair failed and we were unable to recover it.
00:29:57.934 [2024-11-26 19:20:15.139455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.934 [2024-11-26 19:20:15.139558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.934 [2024-11-26 19:20:15.139576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.934 [2024-11-26 19:20:15.139583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.934 [2024-11-26 19:20:15.139591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:57.934 [2024-11-26 19:20:15.139613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:57.934 qpair failed and we were unable to recover it.
00:29:58.196 [2024-11-26 19:20:15.149392] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.196 [2024-11-26 19:20:15.149458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.196 [2024-11-26 19:20:15.149476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.196 [2024-11-26 19:20:15.149483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.196 [2024-11-26 19:20:15.149490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.196 [2024-11-26 19:20:15.149507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.196 qpair failed and we were unable to recover it.
00:29:58.196 [2024-11-26 19:20:15.159435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.196 [2024-11-26 19:20:15.159496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.196 [2024-11-26 19:20:15.159513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.196 [2024-11-26 19:20:15.159521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.196 [2024-11-26 19:20:15.159527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.196 [2024-11-26 19:20:15.159544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.196 qpair failed and we were unable to recover it.
00:29:58.196 [2024-11-26 19:20:15.169480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.196 [2024-11-26 19:20:15.169547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.196 [2024-11-26 19:20:15.169564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.196 [2024-11-26 19:20:15.169572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.196 [2024-11-26 19:20:15.169579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.196 [2024-11-26 19:20:15.169596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.196 qpair failed and we were unable to recover it.
00:29:58.196 [2024-11-26 19:20:15.179515] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.196 [2024-11-26 19:20:15.179585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.196 [2024-11-26 19:20:15.179601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.196 [2024-11-26 19:20:15.179609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.196 [2024-11-26 19:20:15.179615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.196 [2024-11-26 19:20:15.179631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.196 qpair failed and we were unable to recover it.
00:29:58.196 [2024-11-26 19:20:15.189534] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.196 [2024-11-26 19:20:15.189598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.196 [2024-11-26 19:20:15.189619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.196 [2024-11-26 19:20:15.189626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.196 [2024-11-26 19:20:15.189633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.196 [2024-11-26 19:20:15.189651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.196 qpair failed and we were unable to recover it.
00:29:58.196 [2024-11-26 19:20:15.199550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.196 [2024-11-26 19:20:15.199635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.196 [2024-11-26 19:20:15.199652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.196 [2024-11-26 19:20:15.199660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.196 [2024-11-26 19:20:15.199666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.197 [2024-11-26 19:20:15.199683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.197 qpair failed and we were unable to recover it.
00:29:58.197 [2024-11-26 19:20:15.209613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.197 [2024-11-26 19:20:15.209682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.197 [2024-11-26 19:20:15.209700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.197 [2024-11-26 19:20:15.209707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.197 [2024-11-26 19:20:15.209714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.197 [2024-11-26 19:20:15.209730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.197 qpair failed and we were unable to recover it.
00:29:58.197 [2024-11-26 19:20:15.219643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.197 [2024-11-26 19:20:15.219722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.197 [2024-11-26 19:20:15.219739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.197 [2024-11-26 19:20:15.219747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.197 [2024-11-26 19:20:15.219753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.197 [2024-11-26 19:20:15.219770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.197 qpair failed and we were unable to recover it.
00:29:58.197 [2024-11-26 19:20:15.229677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.197 [2024-11-26 19:20:15.229750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.197 [2024-11-26 19:20:15.229768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.197 [2024-11-26 19:20:15.229780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.197 [2024-11-26 19:20:15.229787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.197 [2024-11-26 19:20:15.229804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.197 qpair failed and we were unable to recover it.
00:29:58.197 [2024-11-26 19:20:15.239681] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.197 [2024-11-26 19:20:15.239746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.197 [2024-11-26 19:20:15.239764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.197 [2024-11-26 19:20:15.239772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.197 [2024-11-26 19:20:15.239778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.197 [2024-11-26 19:20:15.239796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.197 qpair failed and we were unable to recover it.
00:29:58.197 [2024-11-26 19:20:15.249727] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.197 [2024-11-26 19:20:15.249847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.197 [2024-11-26 19:20:15.249864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.197 [2024-11-26 19:20:15.249872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.197 [2024-11-26 19:20:15.249879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.197 [2024-11-26 19:20:15.249895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.197 qpair failed and we were unable to recover it.
00:29:58.197 [2024-11-26 19:20:15.259780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.197 [2024-11-26 19:20:15.259855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.197 [2024-11-26 19:20:15.259872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.197 [2024-11-26 19:20:15.259880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.197 [2024-11-26 19:20:15.259886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.197 [2024-11-26 19:20:15.259903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.197 qpair failed and we were unable to recover it.
00:29:58.197 [2024-11-26 19:20:15.269764] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.197 [2024-11-26 19:20:15.269826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.197 [2024-11-26 19:20:15.269846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.197 [2024-11-26 19:20:15.269854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.197 [2024-11-26 19:20:15.269865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.197 [2024-11-26 19:20:15.269889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.197 qpair failed and we were unable to recover it.
00:29:58.197 [2024-11-26 19:20:15.279786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.197 [2024-11-26 19:20:15.279871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.197 [2024-11-26 19:20:15.279891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.197 [2024-11-26 19:20:15.279898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.197 [2024-11-26 19:20:15.279905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.197 [2024-11-26 19:20:15.279922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.197 qpair failed and we were unable to recover it.
00:29:58.197 [2024-11-26 19:20:15.289853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.197 [2024-11-26 19:20:15.289931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.197 [2024-11-26 19:20:15.289969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.197 [2024-11-26 19:20:15.289979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.197 [2024-11-26 19:20:15.289987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.197 [2024-11-26 19:20:15.290011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.197 qpair failed and we were unable to recover it.
00:29:58.197 [2024-11-26 19:20:15.299907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.197 [2024-11-26 19:20:15.300020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.197 [2024-11-26 19:20:15.300040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.197 [2024-11-26 19:20:15.300048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.197 [2024-11-26 19:20:15.300056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.197 [2024-11-26 19:20:15.300074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.197 qpair failed and we were unable to recover it.
00:29:58.197 [2024-11-26 19:20:15.309882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.197 [2024-11-26 19:20:15.309948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.197 [2024-11-26 19:20:15.309967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.197 [2024-11-26 19:20:15.309975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.197 [2024-11-26 19:20:15.309981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.197 [2024-11-26 19:20:15.309999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.197 qpair failed and we were unable to recover it.
00:29:58.197 [2024-11-26 19:20:15.320078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.197 [2024-11-26 19:20:15.320141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.198 [2024-11-26 19:20:15.320183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.198 [2024-11-26 19:20:15.320192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.198 [2024-11-26 19:20:15.320198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.198 [2024-11-26 19:20:15.320216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.198 qpair failed and we were unable to recover it.
00:29:58.198 [2024-11-26 19:20:15.329956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.198 [2024-11-26 19:20:15.330030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.198 [2024-11-26 19:20:15.330049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.198 [2024-11-26 19:20:15.330057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.198 [2024-11-26 19:20:15.330063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.198 [2024-11-26 19:20:15.330083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.198 qpair failed and we were unable to recover it.
00:29:58.198 [2024-11-26 19:20:15.340035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.198 [2024-11-26 19:20:15.340100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.198 [2024-11-26 19:20:15.340117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.198 [2024-11-26 19:20:15.340125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.198 [2024-11-26 19:20:15.340132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.198 [2024-11-26 19:20:15.340149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.198 qpair failed and we were unable to recover it.
00:29:58.198 [2024-11-26 19:20:15.350004] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.198 [2024-11-26 19:20:15.350064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.198 [2024-11-26 19:20:15.350081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.198 [2024-11-26 19:20:15.350089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.198 [2024-11-26 19:20:15.350096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.198 [2024-11-26 19:20:15.350113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.198 qpair failed and we were unable to recover it.
00:29:58.198 [2024-11-26 19:20:15.360041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.198 [2024-11-26 19:20:15.360107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.198 [2024-11-26 19:20:15.360125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.198 [2024-11-26 19:20:15.360140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.198 [2024-11-26 19:20:15.360146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.198 [2024-11-26 19:20:15.360169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.198 qpair failed and we were unable to recover it.
00:29:58.198 [2024-11-26 19:20:15.370085] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.198 [2024-11-26 19:20:15.370150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.198 [2024-11-26 19:20:15.370173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.198 [2024-11-26 19:20:15.370181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.198 [2024-11-26 19:20:15.370188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.198 [2024-11-26 19:20:15.370205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.198 qpair failed and we were unable to recover it.
00:29:58.198 [2024-11-26 19:20:15.380135] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.198 [2024-11-26 19:20:15.380258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.198 [2024-11-26 19:20:15.380276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.198 [2024-11-26 19:20:15.380283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.198 [2024-11-26 19:20:15.380290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.198 [2024-11-26 19:20:15.380307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.198 qpair failed and we were unable to recover it.
00:29:58.198 [2024-11-26 19:20:15.390193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.198 [2024-11-26 19:20:15.390263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.198 [2024-11-26 19:20:15.390281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.198 [2024-11-26 19:20:15.390288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.198 [2024-11-26 19:20:15.390295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.198 [2024-11-26 19:20:15.390312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.198 qpair failed and we were unable to recover it.
00:29:58.198 [2024-11-26 19:20:15.400050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.198 [2024-11-26 19:20:15.400113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.198 [2024-11-26 19:20:15.400130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.198 [2024-11-26 19:20:15.400137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.198 [2024-11-26 19:20:15.400143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.198 [2024-11-26 19:20:15.400171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.198 qpair failed and we were unable to recover it.
00:29:58.471 [2024-11-26 19:20:15.410229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.471 [2024-11-26 19:20:15.410298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.471 [2024-11-26 19:20:15.410316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.471 [2024-11-26 19:20:15.410323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.471 [2024-11-26 19:20:15.410330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.471 [2024-11-26 19:20:15.410347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.471 qpair failed and we were unable to recover it.
00:29:58.471 [2024-11-26 19:20:15.420326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.471 [2024-11-26 19:20:15.420426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.471 [2024-11-26 19:20:15.420443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.471 [2024-11-26 19:20:15.420451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.471 [2024-11-26 19:20:15.420458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.471 [2024-11-26 19:20:15.420475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.471 qpair failed and we were unable to recover it.
00:29:58.471 [2024-11-26 19:20:15.430231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.471 [2024-11-26 19:20:15.430294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.471 [2024-11-26 19:20:15.430312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.471 [2024-11-26 19:20:15.430319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.471 [2024-11-26 19:20:15.430326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.471 [2024-11-26 19:20:15.430342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.471 qpair failed and we were unable to recover it.
00:29:58.471 [2024-11-26 19:20:15.440289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.471 [2024-11-26 19:20:15.440357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.471 [2024-11-26 19:20:15.440374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.471 [2024-11-26 19:20:15.440381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.471 [2024-11-26 19:20:15.440388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.471 [2024-11-26 19:20:15.440405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.471 qpair failed and we were unable to recover it.
00:29:58.471 [2024-11-26 19:20:15.450332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.471 [2024-11-26 19:20:15.450413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.471 [2024-11-26 19:20:15.450432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.471 [2024-11-26 19:20:15.450439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.471 [2024-11-26 19:20:15.450446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.471 [2024-11-26 19:20:15.450463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.471 qpair failed and we were unable to recover it.
00:29:58.471 [2024-11-26 19:20:15.460430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.471 [2024-11-26 19:20:15.460544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.471 [2024-11-26 19:20:15.460561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.471 [2024-11-26 19:20:15.460568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.471 [2024-11-26 19:20:15.460575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.472 [2024-11-26 19:20:15.460591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.472 qpair failed and we were unable to recover it.
00:29:58.472 [2024-11-26 19:20:15.470277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.472 [2024-11-26 19:20:15.470343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.472 [2024-11-26 19:20:15.470360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.472 [2024-11-26 19:20:15.470368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.472 [2024-11-26 19:20:15.470374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.472 [2024-11-26 19:20:15.470390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.472 qpair failed and we were unable to recover it.
00:29:58.472 [2024-11-26 19:20:15.480430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.472 [2024-11-26 19:20:15.480490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.472 [2024-11-26 19:20:15.480506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.472 [2024-11-26 19:20:15.480514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.472 [2024-11-26 19:20:15.480521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.472 [2024-11-26 19:20:15.480537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.472 qpair failed and we were unable to recover it.
00:29:58.472 [2024-11-26 19:20:15.490466] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.472 [2024-11-26 19:20:15.490536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.472 [2024-11-26 19:20:15.490553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.472 [2024-11-26 19:20:15.490566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.472 [2024-11-26 19:20:15.490572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.472 [2024-11-26 19:20:15.490589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.472 qpair failed and we were unable to recover it.
00:29:58.472 [2024-11-26 19:20:15.500458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.472 [2024-11-26 19:20:15.500527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.472 [2024-11-26 19:20:15.500544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.472 [2024-11-26 19:20:15.500551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.472 [2024-11-26 19:20:15.500557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.472 [2024-11-26 19:20:15.500573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.472 qpair failed and we were unable to recover it.
00:29:58.472 [2024-11-26 19:20:15.510544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.472 [2024-11-26 19:20:15.510606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.472 [2024-11-26 19:20:15.510622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.472 [2024-11-26 19:20:15.510629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.472 [2024-11-26 19:20:15.510636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.472 [2024-11-26 19:20:15.510652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.472 qpair failed and we were unable to recover it.
00:29:58.472 [2024-11-26 19:20:15.520537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.472 [2024-11-26 19:20:15.520607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.472 [2024-11-26 19:20:15.520625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.472 [2024-11-26 19:20:15.520633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.472 [2024-11-26 19:20:15.520639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.472 [2024-11-26 19:20:15.520656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.472 qpair failed and we were unable to recover it.
00:29:58.472 [2024-11-26 19:20:15.530606] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.472 [2024-11-26 19:20:15.530677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.472 [2024-11-26 19:20:15.530694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.472 [2024-11-26 19:20:15.530702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.472 [2024-11-26 19:20:15.530708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.472 [2024-11-26 19:20:15.530731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.472 qpair failed and we were unable to recover it.
00:29:58.472 [2024-11-26 19:20:15.540632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.472 [2024-11-26 19:20:15.540708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.472 [2024-11-26 19:20:15.540725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.472 [2024-11-26 19:20:15.540732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.472 [2024-11-26 19:20:15.540739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.472 [2024-11-26 19:20:15.540755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.472 qpair failed and we were unable to recover it.
00:29:58.472 [2024-11-26 19:20:15.550607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.472 [2024-11-26 19:20:15.550703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.472 [2024-11-26 19:20:15.550720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.472 [2024-11-26 19:20:15.550727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.472 [2024-11-26 19:20:15.550733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.472 [2024-11-26 19:20:15.550749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.472 qpair failed and we were unable to recover it.
00:29:58.472 [2024-11-26 19:20:15.560638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.472 [2024-11-26 19:20:15.560695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.472 [2024-11-26 19:20:15.560711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.472 [2024-11-26 19:20:15.560718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.472 [2024-11-26 19:20:15.560725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.472 [2024-11-26 19:20:15.560741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.472 qpair failed and we were unable to recover it.
00:29:58.472 [2024-11-26 19:20:15.570584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.472 [2024-11-26 19:20:15.570654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.472 [2024-11-26 19:20:15.570670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.472 [2024-11-26 19:20:15.570677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.472 [2024-11-26 19:20:15.570683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.472 [2024-11-26 19:20:15.570699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.472 qpair failed and we were unable to recover it.
00:29:58.472 [2024-11-26 19:20:15.580732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.472 [2024-11-26 19:20:15.580811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.472 [2024-11-26 19:20:15.580828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.472 [2024-11-26 19:20:15.580836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.472 [2024-11-26 19:20:15.580843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.472 [2024-11-26 19:20:15.580859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.472 qpair failed and we were unable to recover it.
00:29:58.472 [2024-11-26 19:20:15.590757] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.472 [2024-11-26 19:20:15.590840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.472 [2024-11-26 19:20:15.590862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.472 [2024-11-26 19:20:15.590870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.472 [2024-11-26 19:20:15.590876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.473 [2024-11-26 19:20:15.590896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.473 qpair failed and we were unable to recover it.
00:29:58.473 [2024-11-26 19:20:15.600746] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.473 [2024-11-26 19:20:15.600810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.473 [2024-11-26 19:20:15.600827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.473 [2024-11-26 19:20:15.600834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.473 [2024-11-26 19:20:15.600840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.473 [2024-11-26 19:20:15.600858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.473 qpair failed and we were unable to recover it.
00:29:58.473 [2024-11-26 19:20:15.610792] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.473 [2024-11-26 19:20:15.610865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.473 [2024-11-26 19:20:15.610884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.473 [2024-11-26 19:20:15.610892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.473 [2024-11-26 19:20:15.610903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.473 [2024-11-26 19:20:15.610921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.473 qpair failed and we were unable to recover it.
00:29:58.473 [2024-11-26 19:20:15.620839] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.473 [2024-11-26 19:20:15.620922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.473 [2024-11-26 19:20:15.620960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.473 [2024-11-26 19:20:15.620977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.473 [2024-11-26 19:20:15.620984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.473 [2024-11-26 19:20:15.621008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.473 qpair failed and we were unable to recover it.
00:29:58.473 [2024-11-26 19:20:15.630851] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.473 [2024-11-26 19:20:15.630916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.473 [2024-11-26 19:20:15.630937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.473 [2024-11-26 19:20:15.630945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.473 [2024-11-26 19:20:15.630952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.473 [2024-11-26 19:20:15.630970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.473 qpair failed and we were unable to recover it.
00:29:58.473 [2024-11-26 19:20:15.640846] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.473 [2024-11-26 19:20:15.640918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.473 [2024-11-26 19:20:15.640955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.473 [2024-11-26 19:20:15.640965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.473 [2024-11-26 19:20:15.640972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.473 [2024-11-26 19:20:15.640997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.473 qpair failed and we were unable to recover it.
00:29:58.473 [2024-11-26 19:20:15.650886] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.473 [2024-11-26 19:20:15.650956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.473 [2024-11-26 19:20:15.650980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.473 [2024-11-26 19:20:15.650988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.473 [2024-11-26 19:20:15.650995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.473 [2024-11-26 19:20:15.651014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.473 qpair failed and we were unable to recover it.
00:29:58.473 [2024-11-26 19:20:15.660986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.473 [2024-11-26 19:20:15.661062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.473 [2024-11-26 19:20:15.661080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.473 [2024-11-26 19:20:15.661088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.473 [2024-11-26 19:20:15.661094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.473 [2024-11-26 19:20:15.661118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.473 qpair failed and we were unable to recover it.
00:29:58.473 [2024-11-26 19:20:15.670997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.473 [2024-11-26 19:20:15.671078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.473 [2024-11-26 19:20:15.671096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.473 [2024-11-26 19:20:15.671104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.473 [2024-11-26 19:20:15.671111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.473 [2024-11-26 19:20:15.671128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.473 qpair failed and we were unable to recover it.
00:29:58.799 [2024-11-26 19:20:15.681001] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.799 [2024-11-26 19:20:15.681056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.799 [2024-11-26 19:20:15.681074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.799 [2024-11-26 19:20:15.681081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.799 [2024-11-26 19:20:15.681088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.799 [2024-11-26 19:20:15.681105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.799 qpair failed and we were unable to recover it.
00:29:58.799 [2024-11-26 19:20:15.691062] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.799 [2024-11-26 19:20:15.691132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.799 [2024-11-26 19:20:15.691149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.799 [2024-11-26 19:20:15.691156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.799 [2024-11-26 19:20:15.691171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.799 [2024-11-26 19:20:15.691188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.799 qpair failed and we were unable to recover it.
00:29:58.799 [2024-11-26 19:20:15.701073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.799 [2024-11-26 19:20:15.701175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.799 [2024-11-26 19:20:15.701193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.799 [2024-11-26 19:20:15.701201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.799 [2024-11-26 19:20:15.701207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.799 [2024-11-26 19:20:15.701223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.799 qpair failed and we were unable to recover it.
00:29:58.799 [2024-11-26 19:20:15.711090] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.799 [2024-11-26 19:20:15.711151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.799 [2024-11-26 19:20:15.711174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.799 [2024-11-26 19:20:15.711181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.799 [2024-11-26 19:20:15.711188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.799 [2024-11-26 19:20:15.711204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.799 qpair failed and we were unable to recover it.
00:29:58.799 [2024-11-26 19:20:15.721102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.799 [2024-11-26 19:20:15.721190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.800 [2024-11-26 19:20:15.721207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.800 [2024-11-26 19:20:15.721214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.800 [2024-11-26 19:20:15.721221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.800 [2024-11-26 19:20:15.721238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.800 qpair failed and we were unable to recover it.
00:29:58.800 [2024-11-26 19:20:15.731155] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.800 [2024-11-26 19:20:15.731230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.800 [2024-11-26 19:20:15.731246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.800 [2024-11-26 19:20:15.731254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.800 [2024-11-26 19:20:15.731260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.800 [2024-11-26 19:20:15.731276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.800 qpair failed and we were unable to recover it.
00:29:58.800 [2024-11-26 19:20:15.741189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.800 [2024-11-26 19:20:15.741253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.800 [2024-11-26 19:20:15.741271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.800 [2024-11-26 19:20:15.741278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.800 [2024-11-26 19:20:15.741288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.800 [2024-11-26 19:20:15.741304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.800 qpair failed and we were unable to recover it.
00:29:58.800 [2024-11-26 19:20:15.751207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.800 [2024-11-26 19:20:15.751274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.800 [2024-11-26 19:20:15.751291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.800 [2024-11-26 19:20:15.751303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.800 [2024-11-26 19:20:15.751309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.800 [2024-11-26 19:20:15.751325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.800 qpair failed and we were unable to recover it.
00:29:58.800 [2024-11-26 19:20:15.761091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.800 [2024-11-26 19:20:15.761140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.800 [2024-11-26 19:20:15.761156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.800 [2024-11-26 19:20:15.761169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.800 [2024-11-26 19:20:15.761175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.800 [2024-11-26 19:20:15.761192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.800 qpair failed and we were unable to recover it.
00:29:58.800 [2024-11-26 19:20:15.771250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.800 [2024-11-26 19:20:15.771309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.800 [2024-11-26 19:20:15.771326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.800 [2024-11-26 19:20:15.771333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.800 [2024-11-26 19:20:15.771339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.800 [2024-11-26 19:20:15.771355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.800 qpair failed and we were unable to recover it.
00:29:58.800 [2024-11-26 19:20:15.781279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.800 [2024-11-26 19:20:15.781342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.800 [2024-11-26 19:20:15.781356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.800 [2024-11-26 19:20:15.781363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.800 [2024-11-26 19:20:15.781369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.800 [2024-11-26 19:20:15.781384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.800 qpair failed and we were unable to recover it.
00:29:58.800 [2024-11-26 19:20:15.791344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.800 [2024-11-26 19:20:15.791400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.800 [2024-11-26 19:20:15.791415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.800 [2024-11-26 19:20:15.791422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.800 [2024-11-26 19:20:15.791428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.800 [2024-11-26 19:20:15.791447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.800 qpair failed and we were unable to recover it.
00:29:58.800 [2024-11-26 19:20:15.801452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.800 [2024-11-26 19:20:15.801502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.800 [2024-11-26 19:20:15.801516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.800 [2024-11-26 19:20:15.801523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.800 [2024-11-26 19:20:15.801529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.800 [2024-11-26 19:20:15.801544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.800 qpair failed and we were unable to recover it.
00:29:58.800 [2024-11-26 19:20:15.811371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.800 [2024-11-26 19:20:15.811428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.800 [2024-11-26 19:20:15.811443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.800 [2024-11-26 19:20:15.811450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.800 [2024-11-26 19:20:15.811456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.800 [2024-11-26 19:20:15.811471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.800 qpair failed and we were unable to recover it.
00:29:58.800 [2024-11-26 19:20:15.821453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.800 [2024-11-26 19:20:15.821523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.800 [2024-11-26 19:20:15.821537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.800 [2024-11-26 19:20:15.821545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.800 [2024-11-26 19:20:15.821551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.800 [2024-11-26 19:20:15.821565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.800 qpair failed and we were unable to recover it.
00:29:58.800 [2024-11-26 19:20:15.831391] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.800 [2024-11-26 19:20:15.831465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.800 [2024-11-26 19:20:15.831478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.800 [2024-11-26 19:20:15.831485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.800 [2024-11-26 19:20:15.831492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.800 [2024-11-26 19:20:15.831506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.800 qpair failed and we were unable to recover it.
00:29:58.800 [2024-11-26 19:20:15.841276] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.800 [2024-11-26 19:20:15.841322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.800 [2024-11-26 19:20:15.841336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.800 [2024-11-26 19:20:15.841343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.800 [2024-11-26 19:20:15.841349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.800 [2024-11-26 19:20:15.841363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.800 qpair failed and we were unable to recover it.
00:29:58.800 [2024-11-26 19:20:15.851485] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.800 [2024-11-26 19:20:15.851541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.800 [2024-11-26 19:20:15.851555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.800 [2024-11-26 19:20:15.851562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.801 [2024-11-26 19:20:15.851568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:58.801 [2024-11-26 19:20:15.851582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.801 qpair failed and we were unable to recover it.
00:29:58.801 [2024-11-26 19:20:15.861539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.801 [2024-11-26 19:20:15.861629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.801 [2024-11-26 19:20:15.861642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.801 [2024-11-26 19:20:15.861649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.801 [2024-11-26 19:20:15.861656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:58.801 [2024-11-26 19:20:15.861670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.801 qpair failed and we were unable to recover it. 00:29:58.801 [2024-11-26 19:20:15.871398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.801 [2024-11-26 19:20:15.871463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.801 [2024-11-26 19:20:15.871477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.801 [2024-11-26 19:20:15.871484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.801 [2024-11-26 19:20:15.871490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:58.801 [2024-11-26 19:20:15.871504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.801 qpair failed and we were unable to recover it. 00:29:58.801 [2024-11-26 19:20:15.881511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.801 [2024-11-26 19:20:15.881574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.801 [2024-11-26 19:20:15.881595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.801 [2024-11-26 19:20:15.881602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.801 [2024-11-26 19:20:15.881608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:58.801 [2024-11-26 19:20:15.881622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.801 qpair failed and we were unable to recover it. 
00:29:58.801 [2024-11-26 19:20:15.891463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.801 [2024-11-26 19:20:15.891519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.801 [2024-11-26 19:20:15.891534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.801 [2024-11-26 19:20:15.891541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.801 [2024-11-26 19:20:15.891547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:58.801 [2024-11-26 19:20:15.891562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.801 qpair failed and we were unable to recover it. 00:29:58.801 [2024-11-26 19:20:15.901610] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.801 [2024-11-26 19:20:15.901663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.801 [2024-11-26 19:20:15.901677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.801 [2024-11-26 19:20:15.901684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.801 [2024-11-26 19:20:15.901690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:58.801 [2024-11-26 19:20:15.901703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.801 qpair failed and we were unable to recover it. 00:29:58.801 [2024-11-26 19:20:15.911510] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.801 [2024-11-26 19:20:15.911574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.801 [2024-11-26 19:20:15.911588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.801 [2024-11-26 19:20:15.911595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.801 [2024-11-26 19:20:15.911601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:58.801 [2024-11-26 19:20:15.911614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.801 qpair failed and we were unable to recover it. 
00:29:58.801 [2024-11-26 19:20:15.921588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.801 [2024-11-26 19:20:15.921669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.801 [2024-11-26 19:20:15.921683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.801 [2024-11-26 19:20:15.921689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.801 [2024-11-26 19:20:15.921696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:58.801 [2024-11-26 19:20:15.921713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.801 qpair failed and we were unable to recover it. 00:29:58.801 [2024-11-26 19:20:15.931696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.801 [2024-11-26 19:20:15.931792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.801 [2024-11-26 19:20:15.931804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.801 [2024-11-26 19:20:15.931812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.801 [2024-11-26 19:20:15.931818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:58.801 [2024-11-26 19:20:15.931831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.801 qpair failed and we were unable to recover it. 00:29:58.801 [2024-11-26 19:20:15.941695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.801 [2024-11-26 19:20:15.941742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.801 [2024-11-26 19:20:15.941755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.801 [2024-11-26 19:20:15.941761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.801 [2024-11-26 19:20:15.941768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:58.801 [2024-11-26 19:20:15.941781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.801 qpair failed and we were unable to recover it. 
00:29:58.801 [2024-11-26 19:20:15.951746] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.801 [2024-11-26 19:20:15.951820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.801 [2024-11-26 19:20:15.951833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.801 [2024-11-26 19:20:15.951840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.801 [2024-11-26 19:20:15.951846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:58.801 [2024-11-26 19:20:15.951859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.801 qpair failed and we were unable to recover it. 00:29:58.801 [2024-11-26 19:20:15.961721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.801 [2024-11-26 19:20:15.961772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.801 [2024-11-26 19:20:15.961786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.801 [2024-11-26 19:20:15.961793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.801 [2024-11-26 19:20:15.961799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:58.801 [2024-11-26 19:20:15.961813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.801 qpair failed and we were unable to recover it. 00:29:58.801 [2024-11-26 19:20:15.971776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.801 [2024-11-26 19:20:15.971859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.801 [2024-11-26 19:20:15.971872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.801 [2024-11-26 19:20:15.971879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.801 [2024-11-26 19:20:15.971885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:58.801 [2024-11-26 19:20:15.971899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.801 qpair failed and we were unable to recover it. 
00:29:58.801 [2024-11-26 19:20:15.981791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.801 [2024-11-26 19:20:15.981842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.801 [2024-11-26 19:20:15.981856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.802 [2024-11-26 19:20:15.981865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.802 [2024-11-26 19:20:15.981871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:58.802 [2024-11-26 19:20:15.981885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.802 qpair failed and we were unable to recover it. 00:29:58.802 [2024-11-26 19:20:15.991828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.802 [2024-11-26 19:20:15.991878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.802 [2024-11-26 19:20:15.991893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.802 [2024-11-26 19:20:15.991901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.802 [2024-11-26 19:20:15.991907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:58.802 [2024-11-26 19:20:15.991920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.802 qpair failed and we were unable to recover it. 00:29:58.802 [2024-11-26 19:20:16.001830] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.802 [2024-11-26 19:20:16.001885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.802 [2024-11-26 19:20:16.001898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.802 [2024-11-26 19:20:16.001905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.802 [2024-11-26 19:20:16.001912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:58.802 [2024-11-26 19:20:16.001925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.802 qpair failed and we were unable to recover it. 
00:29:59.085 [2024-11-26 19:20:16.011906] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.085 [2024-11-26 19:20:16.011964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.085 [2024-11-26 19:20:16.011981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.085 [2024-11-26 19:20:16.011988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.085 [2024-11-26 19:20:16.011995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.085 [2024-11-26 19:20:16.012009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.085 qpair failed and we were unable to recover it. 00:29:59.085 [2024-11-26 19:20:16.021862] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.085 [2024-11-26 19:20:16.021918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.085 [2024-11-26 19:20:16.021932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.085 [2024-11-26 19:20:16.021939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.085 [2024-11-26 19:20:16.021945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.085 [2024-11-26 19:20:16.021959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.085 qpair failed and we were unable to recover it. 00:29:59.085 [2024-11-26 19:20:16.031952] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.085 [2024-11-26 19:20:16.032004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.085 [2024-11-26 19:20:16.032017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.085 [2024-11-26 19:20:16.032025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.085 [2024-11-26 19:20:16.032031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.086 [2024-11-26 19:20:16.032044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.086 qpair failed and we were unable to recover it. 
00:29:59.086 [2024-11-26 19:20:16.041937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.086 [2024-11-26 19:20:16.041986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.086 [2024-11-26 19:20:16.042001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.086 [2024-11-26 19:20:16.042008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.086 [2024-11-26 19:20:16.042014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.086 [2024-11-26 19:20:16.042031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.086 qpair failed and we were unable to recover it. 00:29:59.086 [2024-11-26 19:20:16.052031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.086 [2024-11-26 19:20:16.052106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.086 [2024-11-26 19:20:16.052120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.086 [2024-11-26 19:20:16.052127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.086 [2024-11-26 19:20:16.052133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.086 [2024-11-26 19:20:16.052150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.086 qpair failed and we were unable to recover it. 00:29:59.086 [2024-11-26 19:20:16.062021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.086 [2024-11-26 19:20:16.062082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.086 [2024-11-26 19:20:16.062096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.086 [2024-11-26 19:20:16.062102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.086 [2024-11-26 19:20:16.062109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.086 [2024-11-26 19:20:16.062122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.086 qpair failed and we were unable to recover it. 
00:29:59.086 [2024-11-26 19:20:16.072046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.086 [2024-11-26 19:20:16.072099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.086 [2024-11-26 19:20:16.072112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.086 [2024-11-26 19:20:16.072119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.086 [2024-11-26 19:20:16.072125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.086 [2024-11-26 19:20:16.072138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.086 qpair failed and we were unable to recover it. 00:29:59.086 [2024-11-26 19:20:16.082057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.086 [2024-11-26 19:20:16.082113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.086 [2024-11-26 19:20:16.082126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.086 [2024-11-26 19:20:16.082133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.086 [2024-11-26 19:20:16.082139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.086 [2024-11-26 19:20:16.082153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.086 qpair failed and we were unable to recover it. 00:29:59.086 [2024-11-26 19:20:16.092113] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.086 [2024-11-26 19:20:16.092168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.086 [2024-11-26 19:20:16.092182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.086 [2024-11-26 19:20:16.092190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.086 [2024-11-26 19:20:16.092196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.086 [2024-11-26 19:20:16.092210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.086 qpair failed and we were unable to recover it. 
00:29:59.086 [2024-11-26 19:20:16.102103] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.086 [2024-11-26 19:20:16.102156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.086 [2024-11-26 19:20:16.102173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.086 [2024-11-26 19:20:16.102181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.086 [2024-11-26 19:20:16.102187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.086 [2024-11-26 19:20:16.102201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.086 qpair failed and we were unable to recover it. 00:29:59.086 [2024-11-26 19:20:16.112186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.086 [2024-11-26 19:20:16.112239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.086 [2024-11-26 19:20:16.112252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.086 [2024-11-26 19:20:16.112259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.086 [2024-11-26 19:20:16.112266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.086 [2024-11-26 19:20:16.112279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.086 qpair failed and we were unable to recover it. 00:29:59.086 [2024-11-26 19:20:16.122163] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.086 [2024-11-26 19:20:16.122210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.086 [2024-11-26 19:20:16.122224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.086 [2024-11-26 19:20:16.122231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.086 [2024-11-26 19:20:16.122237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.086 [2024-11-26 19:20:16.122251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.086 qpair failed and we were unable to recover it. 
00:29:59.086 [2024-11-26 19:20:16.132238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.086 [2024-11-26 19:20:16.132291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.086 [2024-11-26 19:20:16.132304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.086 [2024-11-26 19:20:16.132311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.086 [2024-11-26 19:20:16.132318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.086 [2024-11-26 19:20:16.132331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.086 qpair failed and we were unable to recover it. 00:29:59.086 [2024-11-26 19:20:16.142230] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.086 [2024-11-26 19:20:16.142297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.086 [2024-11-26 19:20:16.142312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.086 [2024-11-26 19:20:16.142319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.086 [2024-11-26 19:20:16.142326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.086 [2024-11-26 19:20:16.142339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.086 qpair failed and we were unable to recover it. 00:29:59.086 [2024-11-26 19:20:16.152259] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.086 [2024-11-26 19:20:16.152307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.086 [2024-11-26 19:20:16.152320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.086 [2024-11-26 19:20:16.152327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.086 [2024-11-26 19:20:16.152333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.086 [2024-11-26 19:20:16.152347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.086 qpair failed and we were unable to recover it. 
00:29:59.086 [2024-11-26 19:20:16.162273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.086 [2024-11-26 19:20:16.162317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.086 [2024-11-26 19:20:16.162330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.086 [2024-11-26 19:20:16.162336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.086 [2024-11-26 19:20:16.162342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.087 [2024-11-26 19:20:16.162356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.087 qpair failed and we were unable to recover it. 00:29:59.087 [2024-11-26 19:20:16.172349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.087 [2024-11-26 19:20:16.172403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.087 [2024-11-26 19:20:16.172416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.087 [2024-11-26 19:20:16.172423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.087 [2024-11-26 19:20:16.172429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.087 [2024-11-26 19:20:16.172443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.087 qpair failed and we were unable to recover it. 00:29:59.087 [2024-11-26 19:20:16.182345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.087 [2024-11-26 19:20:16.182399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.087 [2024-11-26 19:20:16.182413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.087 [2024-11-26 19:20:16.182420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.087 [2024-11-26 19:20:16.182426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.087 [2024-11-26 19:20:16.182443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.087 qpair failed and we were unable to recover it. 
00:29:59.087 [2024-11-26 19:20:16.192366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.087 [2024-11-26 19:20:16.192430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.087 [2024-11-26 19:20:16.192444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.087 [2024-11-26 19:20:16.192451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.087 [2024-11-26 19:20:16.192457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.087 [2024-11-26 19:20:16.192471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.087 qpair failed and we were unable to recover it. 00:29:59.087 [2024-11-26 19:20:16.202366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.087 [2024-11-26 19:20:16.202417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.087 [2024-11-26 19:20:16.202430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.087 [2024-11-26 19:20:16.202437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.087 [2024-11-26 19:20:16.202443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.087 [2024-11-26 19:20:16.202457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.087 qpair failed and we were unable to recover it. 00:29:59.087 [2024-11-26 19:20:16.212457] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.087 [2024-11-26 19:20:16.212509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.087 [2024-11-26 19:20:16.212523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.087 [2024-11-26 19:20:16.212530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.087 [2024-11-26 19:20:16.212536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.087 [2024-11-26 19:20:16.212549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.087 qpair failed and we were unable to recover it. 
00:29:59.087 [2024-11-26 19:20:16.222467] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.087 [2024-11-26 19:20:16.222515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.087 [2024-11-26 19:20:16.222530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.087 [2024-11-26 19:20:16.222537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.087 [2024-11-26 19:20:16.222543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.087 [2024-11-26 19:20:16.222557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.087 qpair failed and we were unable to recover it. 00:29:59.087 [2024-11-26 19:20:16.232457] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.087 [2024-11-26 19:20:16.232501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.087 [2024-11-26 19:20:16.232514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.087 [2024-11-26 19:20:16.232521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.087 [2024-11-26 19:20:16.232527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.087 [2024-11-26 19:20:16.232541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.087 qpair failed and we were unable to recover it. 00:29:59.087 [2024-11-26 19:20:16.242490] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.087 [2024-11-26 19:20:16.242534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.087 [2024-11-26 19:20:16.242547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.087 [2024-11-26 19:20:16.242554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.087 [2024-11-26 19:20:16.242560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.087 [2024-11-26 19:20:16.242573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.087 qpair failed and we were unable to recover it. 
00:29:59.087 [2024-11-26 19:20:16.252570] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.087 [2024-11-26 19:20:16.252624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.087 [2024-11-26 19:20:16.252637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.087 [2024-11-26 19:20:16.252644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.087 [2024-11-26 19:20:16.252650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.087 [2024-11-26 19:20:16.252664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.087 qpair failed and we were unable to recover it. 00:29:59.087 [2024-11-26 19:20:16.262551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.087 [2024-11-26 19:20:16.262598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.087 [2024-11-26 19:20:16.262611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.087 [2024-11-26 19:20:16.262619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.087 [2024-11-26 19:20:16.262625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.087 [2024-11-26 19:20:16.262638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.087 qpair failed and we were unable to recover it. 00:29:59.087 [2024-11-26 19:20:16.272569] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.087 [2024-11-26 19:20:16.272618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.087 [2024-11-26 19:20:16.272634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.087 [2024-11-26 19:20:16.272642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.087 [2024-11-26 19:20:16.272649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.087 [2024-11-26 19:20:16.272664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.087 qpair failed and we were unable to recover it. 
00:29:59.087 [2024-11-26 19:20:16.282605] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.087 [2024-11-26 19:20:16.282651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.087 [2024-11-26 19:20:16.282664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.087 [2024-11-26 19:20:16.282671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.087 [2024-11-26 19:20:16.282677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.087 [2024-11-26 19:20:16.282691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.087 qpair failed and we were unable to recover it. 00:29:59.087 [2024-11-26 19:20:16.292666] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.087 [2024-11-26 19:20:16.292722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.087 [2024-11-26 19:20:16.292735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.087 [2024-11-26 19:20:16.292742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.087 [2024-11-26 19:20:16.292749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.087 [2024-11-26 19:20:16.292762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.087 qpair failed and we were unable to recover it. 00:29:59.349 [2024-11-26 19:20:16.302685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.349 [2024-11-26 19:20:16.302737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.349 [2024-11-26 19:20:16.302750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.349 [2024-11-26 19:20:16.302757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.349 [2024-11-26 19:20:16.302763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.349 [2024-11-26 19:20:16.302777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.349 qpair failed and we were unable to recover it. 
00:29:59.349 [2024-11-26 19:20:16.312680] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.349 [2024-11-26 19:20:16.312732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.349 [2024-11-26 19:20:16.312745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.349 [2024-11-26 19:20:16.312752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.349 [2024-11-26 19:20:16.312761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.349 [2024-11-26 19:20:16.312775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.349 qpair failed and we were unable to recover it. 00:29:59.349 [2024-11-26 19:20:16.322710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.349 [2024-11-26 19:20:16.322752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.349 [2024-11-26 19:20:16.322766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.349 [2024-11-26 19:20:16.322773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.349 [2024-11-26 19:20:16.322779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.349 [2024-11-26 19:20:16.322792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.349 qpair failed and we were unable to recover it. 00:29:59.349 [2024-11-26 19:20:16.332793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.349 [2024-11-26 19:20:16.332848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.349 [2024-11-26 19:20:16.332861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.349 [2024-11-26 19:20:16.332868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.349 [2024-11-26 19:20:16.332874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.349 [2024-11-26 19:20:16.332888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.349 qpair failed and we were unable to recover it. 
00:29:59.349 [2024-11-26 19:20:16.342757] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.349 [2024-11-26 19:20:16.342808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.349 [2024-11-26 19:20:16.342821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.349 [2024-11-26 19:20:16.342828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.349 [2024-11-26 19:20:16.342835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.349 [2024-11-26 19:20:16.342848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.349 qpair failed and we were unable to recover it. 00:29:59.349 [2024-11-26 19:20:16.352791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.349 [2024-11-26 19:20:16.352837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.349 [2024-11-26 19:20:16.352850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.349 [2024-11-26 19:20:16.352857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.349 [2024-11-26 19:20:16.352864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.349 [2024-11-26 19:20:16.352877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.349 qpair failed and we were unable to recover it. 00:29:59.349 [2024-11-26 19:20:16.362818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.349 [2024-11-26 19:20:16.362864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.349 [2024-11-26 19:20:16.362877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.350 [2024-11-26 19:20:16.362884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.350 [2024-11-26 19:20:16.362890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.350 [2024-11-26 19:20:16.362904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.350 qpair failed and we were unable to recover it. 
00:29:59.350 [2024-11-26 19:20:16.372901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.350 [2024-11-26 19:20:16.372961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.350 [2024-11-26 19:20:16.372987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.350 [2024-11-26 19:20:16.372995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.350 [2024-11-26 19:20:16.373002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.350 [2024-11-26 19:20:16.373021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.350 qpair failed and we were unable to recover it. 00:29:59.350 [2024-11-26 19:20:16.382891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.350 [2024-11-26 19:20:16.382950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.350 [2024-11-26 19:20:16.382975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.350 [2024-11-26 19:20:16.382984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.350 [2024-11-26 19:20:16.382991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.350 [2024-11-26 19:20:16.383009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.350 qpair failed and we were unable to recover it. 00:29:59.350 [2024-11-26 19:20:16.392904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.350 [2024-11-26 19:20:16.392957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.350 [2024-11-26 19:20:16.392973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.350 [2024-11-26 19:20:16.392981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.350 [2024-11-26 19:20:16.392987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.350 [2024-11-26 19:20:16.393002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.350 qpair failed and we were unable to recover it. 
00:29:59.350 [2024-11-26 19:20:16.402892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.350 [2024-11-26 19:20:16.402940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.350 [2024-11-26 19:20:16.402957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.350 [2024-11-26 19:20:16.402965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.350 [2024-11-26 19:20:16.402971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.350 [2024-11-26 19:20:16.402985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.350 qpair failed and we were unable to recover it. 00:29:59.350 [2024-11-26 19:20:16.412997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.350 [2024-11-26 19:20:16.413052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.350 [2024-11-26 19:20:16.413066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.350 [2024-11-26 19:20:16.413073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.350 [2024-11-26 19:20:16.413080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.350 [2024-11-26 19:20:16.413094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.350 qpair failed and we were unable to recover it. 00:29:59.350 [2024-11-26 19:20:16.422878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.350 [2024-11-26 19:20:16.422930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.350 [2024-11-26 19:20:16.422943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.350 [2024-11-26 19:20:16.422950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.350 [2024-11-26 19:20:16.422957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.350 [2024-11-26 19:20:16.422971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.350 qpair failed and we were unable to recover it. 
00:29:59.350 [2024-11-26 19:20:16.433008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.350 [2024-11-26 19:20:16.433069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.350 [2024-11-26 19:20:16.433082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.350 [2024-11-26 19:20:16.433090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.350 [2024-11-26 19:20:16.433096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.350 [2024-11-26 19:20:16.433110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.350 qpair failed and we were unable to recover it. 00:29:59.350 [2024-11-26 19:20:16.443008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.350 [2024-11-26 19:20:16.443057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.350 [2024-11-26 19:20:16.443070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.350 [2024-11-26 19:20:16.443078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.350 [2024-11-26 19:20:16.443087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.350 [2024-11-26 19:20:16.443102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.350 qpair failed and we were unable to recover it. 00:29:59.350 [2024-11-26 19:20:16.453112] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.350 [2024-11-26 19:20:16.453173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.350 [2024-11-26 19:20:16.453187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.350 [2024-11-26 19:20:16.453194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.350 [2024-11-26 19:20:16.453201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.350 [2024-11-26 19:20:16.453215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.350 qpair failed and we were unable to recover it. 
00:29:59.350 [2024-11-26 19:20:16.463078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.350 [2024-11-26 19:20:16.463134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.350 [2024-11-26 19:20:16.463147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.350 [2024-11-26 19:20:16.463154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.350 [2024-11-26 19:20:16.463164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.350 [2024-11-26 19:20:16.463178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.350 qpair failed and we were unable to recover it. 00:29:59.350 [2024-11-26 19:20:16.473123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.350 [2024-11-26 19:20:16.473178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.350 [2024-11-26 19:20:16.473192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.350 [2024-11-26 19:20:16.473199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.350 [2024-11-26 19:20:16.473205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.350 [2024-11-26 19:20:16.473220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.350 qpair failed and we were unable to recover it. 00:29:59.350 [2024-11-26 19:20:16.483142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.350 [2024-11-26 19:20:16.483192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.350 [2024-11-26 19:20:16.483206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.350 [2024-11-26 19:20:16.483213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.351 [2024-11-26 19:20:16.483220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.351 [2024-11-26 19:20:16.483234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.351 qpair failed and we were unable to recover it. 
00:29:59.351 [2024-11-26 19:20:16.493216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.351 [2024-11-26 19:20:16.493271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.351 [2024-11-26 19:20:16.493284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.351 [2024-11-26 19:20:16.493291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.351 [2024-11-26 19:20:16.493298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.351 [2024-11-26 19:20:16.493312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.351 qpair failed and we were unable to recover it. 00:29:59.351 [2024-11-26 19:20:16.503211] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.351 [2024-11-26 19:20:16.503264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.351 [2024-11-26 19:20:16.503277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.351 [2024-11-26 19:20:16.503284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.351 [2024-11-26 19:20:16.503290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.351 [2024-11-26 19:20:16.503304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.351 qpair failed and we were unable to recover it. 00:29:59.351 [2024-11-26 19:20:16.513184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.351 [2024-11-26 19:20:16.513232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.351 [2024-11-26 19:20:16.513246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.351 [2024-11-26 19:20:16.513253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.351 [2024-11-26 19:20:16.513259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.351 [2024-11-26 19:20:16.513273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.351 qpair failed and we were unable to recover it. 
00:29:59.351 [2024-11-26 19:20:16.523256] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.351 [2024-11-26 19:20:16.523304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.351 [2024-11-26 19:20:16.523318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.351 [2024-11-26 19:20:16.523326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.351 [2024-11-26 19:20:16.523333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.351 [2024-11-26 19:20:16.523347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.351 qpair failed and we were unable to recover it. 00:29:59.351 [2024-11-26 19:20:16.533329] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.351 [2024-11-26 19:20:16.533385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.351 [2024-11-26 19:20:16.533405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.351 [2024-11-26 19:20:16.533412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.351 [2024-11-26 19:20:16.533418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.351 [2024-11-26 19:20:16.533432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.351 qpair failed and we were unable to recover it. 00:29:59.351 [2024-11-26 19:20:16.543311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.351 [2024-11-26 19:20:16.543368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.351 [2024-11-26 19:20:16.543381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.351 [2024-11-26 19:20:16.543388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.351 [2024-11-26 19:20:16.543394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.351 [2024-11-26 19:20:16.543408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.351 qpair failed and we were unable to recover it. 
00:29:59.351 [2024-11-26 19:20:16.553312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.351 [2024-11-26 19:20:16.553361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.351 [2024-11-26 19:20:16.553374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.351 [2024-11-26 19:20:16.553381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.351 [2024-11-26 19:20:16.553388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.351 [2024-11-26 19:20:16.553402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.351 qpair failed and we were unable to recover it. 00:29:59.617 [2024-11-26 19:20:16.563412] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.617 [2024-11-26 19:20:16.563497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.617 [2024-11-26 19:20:16.563510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.617 [2024-11-26 19:20:16.563518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.617 [2024-11-26 19:20:16.563524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.617 [2024-11-26 19:20:16.563538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.617 qpair failed and we were unable to recover it. 00:29:59.617 [2024-11-26 19:20:16.573437] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.617 [2024-11-26 19:20:16.573490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.617 [2024-11-26 19:20:16.573503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.617 [2024-11-26 19:20:16.573510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.617 [2024-11-26 19:20:16.573520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.617 [2024-11-26 19:20:16.573534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.617 qpair failed and we were unable to recover it. 
00:29:59.617 [2024-11-26 19:20:16.583413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.618 [2024-11-26 19:20:16.583490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.618 [2024-11-26 19:20:16.583504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.618 [2024-11-26 19:20:16.583511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.618 [2024-11-26 19:20:16.583518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.618 [2024-11-26 19:20:16.583531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.618 qpair failed and we were unable to recover it. 00:29:59.618 [2024-11-26 19:20:16.593442] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.618 [2024-11-26 19:20:16.593489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.618 [2024-11-26 19:20:16.593505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.618 [2024-11-26 19:20:16.593511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.618 [2024-11-26 19:20:16.593518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.618 [2024-11-26 19:20:16.593532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.618 qpair failed and we were unable to recover it. 00:29:59.618 [2024-11-26 19:20:16.603442] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.618 [2024-11-26 19:20:16.603487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.618 [2024-11-26 19:20:16.603501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.618 [2024-11-26 19:20:16.603508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.618 [2024-11-26 19:20:16.603514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.618 [2024-11-26 19:20:16.603527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.618 qpair failed and we were unable to recover it. 
00:29:59.618 [2024-11-26 19:20:16.613602] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.618 [2024-11-26 19:20:16.613664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.618 [2024-11-26 19:20:16.613678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.618 [2024-11-26 19:20:16.613685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.618 [2024-11-26 19:20:16.613691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.618 [2024-11-26 19:20:16.613705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.618 qpair failed and we were unable to recover it. 00:29:59.618 [2024-11-26 19:20:16.623559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.618 [2024-11-26 19:20:16.623608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.618 [2024-11-26 19:20:16.623622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.618 [2024-11-26 19:20:16.623629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.618 [2024-11-26 19:20:16.623635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.618 [2024-11-26 19:20:16.623649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.618 qpair failed and we were unable to recover it. 00:29:59.618 [2024-11-26 19:20:16.633567] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.618 [2024-11-26 19:20:16.633660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.618 [2024-11-26 19:20:16.633673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.618 [2024-11-26 19:20:16.633680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.618 [2024-11-26 19:20:16.633686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.618 [2024-11-26 19:20:16.633700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.618 qpair failed and we were unable to recover it. 
00:29:59.618 [2024-11-26 19:20:16.643463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.618 [2024-11-26 19:20:16.643554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.618 [2024-11-26 19:20:16.643566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.618 [2024-11-26 19:20:16.643573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.618 [2024-11-26 19:20:16.643579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.618 [2024-11-26 19:20:16.643593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.618 qpair failed and we were unable to recover it. 00:29:59.618 [2024-11-26 19:20:16.653672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.618 [2024-11-26 19:20:16.653726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.618 [2024-11-26 19:20:16.653739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.618 [2024-11-26 19:20:16.653746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.618 [2024-11-26 19:20:16.653753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.618 [2024-11-26 19:20:16.653766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.618 qpair failed and we were unable to recover it. 00:29:59.618 [2024-11-26 19:20:16.663639] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.618 [2024-11-26 19:20:16.663695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.618 [2024-11-26 19:20:16.663712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.618 [2024-11-26 19:20:16.663719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.618 [2024-11-26 19:20:16.663726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.618 [2024-11-26 19:20:16.663739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.618 qpair failed and we were unable to recover it. 
00:29:59.618 [2024-11-26 19:20:16.673789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.618 [2024-11-26 19:20:16.673837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.618 [2024-11-26 19:20:16.673851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.618 [2024-11-26 19:20:16.673858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.618 [2024-11-26 19:20:16.673864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.618 [2024-11-26 19:20:16.673879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.618 qpair failed and we were unable to recover it. 00:29:59.618 [2024-11-26 19:20:16.683706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.618 [2024-11-26 19:20:16.683757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.618 [2024-11-26 19:20:16.683770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.618 [2024-11-26 19:20:16.683777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.618 [2024-11-26 19:20:16.683783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.618 [2024-11-26 19:20:16.683797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.618 qpair failed and we were unable to recover it. 00:29:59.618 [2024-11-26 19:20:16.693648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.618 [2024-11-26 19:20:16.693701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.618 [2024-11-26 19:20:16.693715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.618 [2024-11-26 19:20:16.693722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.618 [2024-11-26 19:20:16.693728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.618 [2024-11-26 19:20:16.693742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.618 qpair failed and we were unable to recover it. 
00:29:59.618 [2024-11-26 19:20:16.703769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.618 [2024-11-26 19:20:16.703823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.618 [2024-11-26 19:20:16.703836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.618 [2024-11-26 19:20:16.703842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.618 [2024-11-26 19:20:16.703852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.618 [2024-11-26 19:20:16.703866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.618 qpair failed and we were unable to recover it. 00:29:59.618 [2024-11-26 19:20:16.713759] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.618 [2024-11-26 19:20:16.713807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.618 [2024-11-26 19:20:16.713820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.618 [2024-11-26 19:20:16.713827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.618 [2024-11-26 19:20:16.713833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.618 [2024-11-26 19:20:16.713847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.618 qpair failed and we were unable to recover it. 00:29:59.618 [2024-11-26 19:20:16.723787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.618 [2024-11-26 19:20:16.723832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.618 [2024-11-26 19:20:16.723845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.618 [2024-11-26 19:20:16.723852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.618 [2024-11-26 19:20:16.723859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.618 [2024-11-26 19:20:16.723872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.618 qpair failed and we were unable to recover it. 
00:29:59.618 [2024-11-26 19:20:16.733748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.618 [2024-11-26 19:20:16.733812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.618 [2024-11-26 19:20:16.733825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.618 [2024-11-26 19:20:16.733832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.618 [2024-11-26 19:20:16.733838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.618 [2024-11-26 19:20:16.733851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.618 qpair failed and we were unable to recover it. 00:29:59.618 [2024-11-26 19:20:16.743861] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.618 [2024-11-26 19:20:16.743923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.618 [2024-11-26 19:20:16.743948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.618 [2024-11-26 19:20:16.743956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.618 [2024-11-26 19:20:16.743963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.618 [2024-11-26 19:20:16.743982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.618 qpair failed and we were unable to recover it. 00:29:59.618 [2024-11-26 19:20:16.753870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.618 [2024-11-26 19:20:16.753918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.618 [2024-11-26 19:20:16.753934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.618 [2024-11-26 19:20:16.753942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.618 [2024-11-26 19:20:16.753948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.618 [2024-11-26 19:20:16.753963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.618 qpair failed and we were unable to recover it. 
00:29:59.618 [2024-11-26 19:20:16.763905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.618 [2024-11-26 19:20:16.764005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.618 [2024-11-26 19:20:16.764020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.618 [2024-11-26 19:20:16.764027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.619 [2024-11-26 19:20:16.764034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.619 [2024-11-26 19:20:16.764048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.619 qpair failed and we were unable to recover it. 00:29:59.619 [2024-11-26 19:20:16.773885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.619 [2024-11-26 19:20:16.773938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.619 [2024-11-26 19:20:16.773952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.619 [2024-11-26 19:20:16.773960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.619 [2024-11-26 19:20:16.773968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.619 [2024-11-26 19:20:16.773982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.619 qpair failed and we were unable to recover it. 00:29:59.619 [2024-11-26 19:20:16.783973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.619 [2024-11-26 19:20:16.784024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.619 [2024-11-26 19:20:16.784038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.619 [2024-11-26 19:20:16.784045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.619 [2024-11-26 19:20:16.784051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.619 [2024-11-26 19:20:16.784065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.619 qpair failed and we were unable to recover it. 
00:29:59.619 [2024-11-26 19:20:16.793872] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.619 [2024-11-26 19:20:16.793930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.619 [2024-11-26 19:20:16.793948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.619 [2024-11-26 19:20:16.793956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.619 [2024-11-26 19:20:16.793962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.619 [2024-11-26 19:20:16.793977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.619 qpair failed and we were unable to recover it. 00:29:59.619 [2024-11-26 19:20:16.804014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.619 [2024-11-26 19:20:16.804064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.619 [2024-11-26 19:20:16.804078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.619 [2024-11-26 19:20:16.804085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.619 [2024-11-26 19:20:16.804092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.619 [2024-11-26 19:20:16.804105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.619 qpair failed and we were unable to recover it. 00:29:59.619 [2024-11-26 19:20:16.814087] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.619 [2024-11-26 19:20:16.814138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.619 [2024-11-26 19:20:16.814152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.619 [2024-11-26 19:20:16.814163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.619 [2024-11-26 19:20:16.814170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.619 [2024-11-26 19:20:16.814184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.619 qpair failed and we were unable to recover it. 
00:29:59.619 [2024-11-26 19:20:16.823975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.619 [2024-11-26 19:20:16.824023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.619 [2024-11-26 19:20:16.824037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.619 [2024-11-26 19:20:16.824044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.619 [2024-11-26 19:20:16.824050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.619 [2024-11-26 19:20:16.824064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.619 qpair failed and we were unable to recover it. 00:29:59.880 [2024-11-26 19:20:16.834093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.880 [2024-11-26 19:20:16.834186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.880 [2024-11-26 19:20:16.834200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.880 [2024-11-26 19:20:16.834208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.880 [2024-11-26 19:20:16.834217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.880 [2024-11-26 19:20:16.834232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.880 qpair failed and we were unable to recover it. 00:29:59.880 [2024-11-26 19:20:16.844123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.880 [2024-11-26 19:20:16.844179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.880 [2024-11-26 19:20:16.844192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.880 [2024-11-26 19:20:16.844199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.880 [2024-11-26 19:20:16.844206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.880 [2024-11-26 19:20:16.844219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.880 qpair failed and we were unable to recover it. 
00:29:59.880 [2024-11-26 19:20:16.854199] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.880 [2024-11-26 19:20:16.854294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.880 [2024-11-26 19:20:16.854307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.880 [2024-11-26 19:20:16.854315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.880 [2024-11-26 19:20:16.854321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.880 [2024-11-26 19:20:16.854335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.880 qpair failed and we were unable to recover it. 00:29:59.880 [2024-11-26 19:20:16.864161] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.880 [2024-11-26 19:20:16.864215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.880 [2024-11-26 19:20:16.864228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.880 [2024-11-26 19:20:16.864235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.880 [2024-11-26 19:20:16.864241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.880 [2024-11-26 19:20:16.864255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.880 qpair failed and we were unable to recover it. 00:29:59.880 [2024-11-26 19:20:16.874195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.880 [2024-11-26 19:20:16.874246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.880 [2024-11-26 19:20:16.874260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.880 [2024-11-26 19:20:16.874267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.880 [2024-11-26 19:20:16.874274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.880 [2024-11-26 19:20:16.874288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.880 qpair failed and we were unable to recover it. 
00:29:59.880 [2024-11-26 19:20:16.884227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.880 [2024-11-26 19:20:16.884274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.880 [2024-11-26 19:20:16.884288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.880 [2024-11-26 19:20:16.884294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.880 [2024-11-26 19:20:16.884301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.880 [2024-11-26 19:20:16.884315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.880 qpair failed and we were unable to recover it. 00:29:59.880 [2024-11-26 19:20:16.894311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.880 [2024-11-26 19:20:16.894365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.880 [2024-11-26 19:20:16.894377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.880 [2024-11-26 19:20:16.894384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.880 [2024-11-26 19:20:16.894391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.880 [2024-11-26 19:20:16.894404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.880 qpair failed and we were unable to recover it. 00:29:59.880 [2024-11-26 19:20:16.904218] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.880 [2024-11-26 19:20:16.904276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.880 [2024-11-26 19:20:16.904289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.880 [2024-11-26 19:20:16.904296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.880 [2024-11-26 19:20:16.904302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.880 [2024-11-26 19:20:16.904316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.880 qpair failed and we were unable to recover it. 
00:29:59.880 [2024-11-26 19:20:16.914315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.880 [2024-11-26 19:20:16.914408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.880 [2024-11-26 19:20:16.914422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.880 [2024-11-26 19:20:16.914429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.880 [2024-11-26 19:20:16.914435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.880 [2024-11-26 19:20:16.914448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.880 qpair failed and we were unable to recover it. 00:29:59.880 [2024-11-26 19:20:16.924314] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.880 [2024-11-26 19:20:16.924382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.880 [2024-11-26 19:20:16.924399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.880 [2024-11-26 19:20:16.924406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.880 [2024-11-26 19:20:16.924412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.881 [2024-11-26 19:20:16.924426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.881 qpair failed and we were unable to recover it. 00:29:59.881 [2024-11-26 19:20:16.934445] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.881 [2024-11-26 19:20:16.934497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.881 [2024-11-26 19:20:16.934511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.881 [2024-11-26 19:20:16.934518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.881 [2024-11-26 19:20:16.934525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:29:59.881 [2024-11-26 19:20:16.934538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.881 qpair failed and we were unable to recover it. 
00:29:59.881 [2024-11-26 19:20:16.944419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.881 [2024-11-26 19:20:16.944507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.881 [2024-11-26 19:20:16.944521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.881 [2024-11-26 19:20:16.944528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.881 [2024-11-26 19:20:16.944534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:59.881 [2024-11-26 19:20:16.944548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:59.881 qpair failed and we were unable to recover it.
00:29:59.881 [2024-11-26 19:20:16.954475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.881 [2024-11-26 19:20:16.954553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.881 [2024-11-26 19:20:16.954566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.881 [2024-11-26 19:20:16.954574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.881 [2024-11-26 19:20:16.954580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:59.881 [2024-11-26 19:20:16.954593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:59.881 qpair failed and we were unable to recover it.
00:29:59.881 [2024-11-26 19:20:16.964440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.881 [2024-11-26 19:20:16.964489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.881 [2024-11-26 19:20:16.964502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.881 [2024-11-26 19:20:16.964510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.881 [2024-11-26 19:20:16.964519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:59.881 [2024-11-26 19:20:16.964534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:59.881 qpair failed and we were unable to recover it.
00:29:59.881 [2024-11-26 19:20:16.974522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.881 [2024-11-26 19:20:16.974580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.881 [2024-11-26 19:20:16.974594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.881 [2024-11-26 19:20:16.974601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.881 [2024-11-26 19:20:16.974607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:59.881 [2024-11-26 19:20:16.974620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:59.881 qpair failed and we were unable to recover it.
00:29:59.881 [2024-11-26 19:20:16.984526] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.881 [2024-11-26 19:20:16.984581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.881 [2024-11-26 19:20:16.984594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.881 [2024-11-26 19:20:16.984602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.881 [2024-11-26 19:20:16.984608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:59.881 [2024-11-26 19:20:16.984622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:59.881 qpair failed and we were unable to recover it.
00:29:59.881 [2024-11-26 19:20:16.994553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.881 [2024-11-26 19:20:16.994602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.881 [2024-11-26 19:20:16.994616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.881 [2024-11-26 19:20:16.994623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.881 [2024-11-26 19:20:16.994629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:59.881 [2024-11-26 19:20:16.994642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:59.881 qpair failed and we were unable to recover it.
00:29:59.881 [2024-11-26 19:20:17.004445] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.881 [2024-11-26 19:20:17.004494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.881 [2024-11-26 19:20:17.004508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.881 [2024-11-26 19:20:17.004515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.881 [2024-11-26 19:20:17.004521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:59.881 [2024-11-26 19:20:17.004535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:59.881 qpair failed and we were unable to recover it.
00:29:59.881 [2024-11-26 19:20:17.014625] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.881 [2024-11-26 19:20:17.014678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.881 [2024-11-26 19:20:17.014692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.881 [2024-11-26 19:20:17.014699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.881 [2024-11-26 19:20:17.014706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:59.881 [2024-11-26 19:20:17.014719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:59.881 qpair failed and we were unable to recover it.
00:29:59.881 [2024-11-26 19:20:17.024604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.881 [2024-11-26 19:20:17.024657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.881 [2024-11-26 19:20:17.024672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.881 [2024-11-26 19:20:17.024679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.881 [2024-11-26 19:20:17.024685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:59.881 [2024-11-26 19:20:17.024699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:59.881 qpair failed and we were unable to recover it.
00:29:59.881 [2024-11-26 19:20:17.034673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.881 [2024-11-26 19:20:17.034724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.881 [2024-11-26 19:20:17.034737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.881 [2024-11-26 19:20:17.034744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.881 [2024-11-26 19:20:17.034751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:59.881 [2024-11-26 19:20:17.034765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:59.881 qpair failed and we were unable to recover it.
00:29:59.881 [2024-11-26 19:20:17.044669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.881 [2024-11-26 19:20:17.044738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.882 [2024-11-26 19:20:17.044751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.882 [2024-11-26 19:20:17.044758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.882 [2024-11-26 19:20:17.044764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:59.882 [2024-11-26 19:20:17.044778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:59.882 qpair failed and we were unable to recover it.
00:29:59.882 [2024-11-26 19:20:17.054765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.882 [2024-11-26 19:20:17.054820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.882 [2024-11-26 19:20:17.054837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.882 [2024-11-26 19:20:17.054844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.882 [2024-11-26 19:20:17.054850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:59.882 [2024-11-26 19:20:17.054864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:59.882 qpair failed and we were unable to recover it.
00:29:59.882 [2024-11-26 19:20:17.064755] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.882 [2024-11-26 19:20:17.064807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.882 [2024-11-26 19:20:17.064820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.882 [2024-11-26 19:20:17.064827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.882 [2024-11-26 19:20:17.064834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:59.882 [2024-11-26 19:20:17.064847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:59.882 qpair failed and we were unable to recover it.
00:29:59.882 [2024-11-26 19:20:17.074750] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.882 [2024-11-26 19:20:17.074796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.882 [2024-11-26 19:20:17.074810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.882 [2024-11-26 19:20:17.074817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.882 [2024-11-26 19:20:17.074824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:59.882 [2024-11-26 19:20:17.074837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:59.882 qpair failed and we were unable to recover it.
00:29:59.882 [2024-11-26 19:20:17.084794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.882 [2024-11-26 19:20:17.084844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.882 [2024-11-26 19:20:17.084858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.882 [2024-11-26 19:20:17.084865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.882 [2024-11-26 19:20:17.084872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:29:59.882 [2024-11-26 19:20:17.084889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:59.882 qpair failed and we were unable to recover it.
00:30:00.144 [2024-11-26 19:20:17.094913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.144 [2024-11-26 19:20:17.095020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.144 [2024-11-26 19:20:17.095035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.144 [2024-11-26 19:20:17.095043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.144 [2024-11-26 19:20:17.095053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.144 [2024-11-26 19:20:17.095068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.144 qpair failed and we were unable to recover it.
00:30:00.144 [2024-11-26 19:20:17.104826] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.144 [2024-11-26 19:20:17.104918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.144 [2024-11-26 19:20:17.104931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.144 [2024-11-26 19:20:17.104938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.144 [2024-11-26 19:20:17.104945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.144 [2024-11-26 19:20:17.104959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.144 qpair failed and we were unable to recover it.
00:30:00.144 [2024-11-26 19:20:17.114896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.144 [2024-11-26 19:20:17.114946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.144 [2024-11-26 19:20:17.114959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.144 [2024-11-26 19:20:17.114967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.144 [2024-11-26 19:20:17.114973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.144 [2024-11-26 19:20:17.114986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.144 qpair failed and we were unable to recover it.
00:30:00.144 [2024-11-26 19:20:17.124904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.144 [2024-11-26 19:20:17.124950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.144 [2024-11-26 19:20:17.124963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.144 [2024-11-26 19:20:17.124971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.144 [2024-11-26 19:20:17.124977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.144 [2024-11-26 19:20:17.124991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.144 qpair failed and we were unable to recover it.
00:30:00.144 [2024-11-26 19:20:17.134989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.144 [2024-11-26 19:20:17.135046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.144 [2024-11-26 19:20:17.135060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.144 [2024-11-26 19:20:17.135066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.144 [2024-11-26 19:20:17.135073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.144 [2024-11-26 19:20:17.135086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.144 qpair failed and we were unable to recover it.
00:30:00.144 [2024-11-26 19:20:17.144978] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.144 [2024-11-26 19:20:17.145030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.144 [2024-11-26 19:20:17.145044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.144 [2024-11-26 19:20:17.145051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.144 [2024-11-26 19:20:17.145057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.144 [2024-11-26 19:20:17.145070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.144 qpair failed and we were unable to recover it.
00:30:00.144 [2024-11-26 19:20:17.154983] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.144 [2024-11-26 19:20:17.155030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.144 [2024-11-26 19:20:17.155043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.144 [2024-11-26 19:20:17.155050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.144 [2024-11-26 19:20:17.155057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.144 [2024-11-26 19:20:17.155070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.144 qpair failed and we were unable to recover it.
00:30:00.144 [2024-11-26 19:20:17.164996] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.144 [2024-11-26 19:20:17.165043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.144 [2024-11-26 19:20:17.165057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.144 [2024-11-26 19:20:17.165064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.144 [2024-11-26 19:20:17.165070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.144 [2024-11-26 19:20:17.165083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.144 qpair failed and we were unable to recover it.
00:30:00.144 [2024-11-26 19:20:17.175086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.144 [2024-11-26 19:20:17.175140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.144 [2024-11-26 19:20:17.175154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.144 [2024-11-26 19:20:17.175164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.144 [2024-11-26 19:20:17.175171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.144 [2024-11-26 19:20:17.175184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.144 qpair failed and we were unable to recover it.
00:30:00.144 [2024-11-26 19:20:17.185393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.144 [2024-11-26 19:20:17.185448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.144 [2024-11-26 19:20:17.185469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.144 [2024-11-26 19:20:17.185476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.144 [2024-11-26 19:20:17.185482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.144 [2024-11-26 19:20:17.185496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.144 qpair failed and we were unable to recover it.
00:30:00.144 [2024-11-26 19:20:17.195064] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.144 [2024-11-26 19:20:17.195110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.144 [2024-11-26 19:20:17.195123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.144 [2024-11-26 19:20:17.195130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.144 [2024-11-26 19:20:17.195136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.144 [2024-11-26 19:20:17.195150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.144 qpair failed and we were unable to recover it.
00:30:00.144 [2024-11-26 19:20:17.205088] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.144 [2024-11-26 19:20:17.205137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.144 [2024-11-26 19:20:17.205150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.144 [2024-11-26 19:20:17.205157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.144 [2024-11-26 19:20:17.205168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.144 [2024-11-26 19:20:17.205182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.144 qpair failed and we were unable to recover it.
00:30:00.144 [2024-11-26 19:20:17.215189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.144 [2024-11-26 19:20:17.215242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.144 [2024-11-26 19:20:17.215255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.144 [2024-11-26 19:20:17.215263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.144 [2024-11-26 19:20:17.215269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.144 [2024-11-26 19:20:17.215282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.144 qpair failed and we were unable to recover it.
00:30:00.144 [2024-11-26 19:20:17.225203] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.144 [2024-11-26 19:20:17.225254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.144 [2024-11-26 19:20:17.225267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.144 [2024-11-26 19:20:17.225275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.144 [2024-11-26 19:20:17.225284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.144 [2024-11-26 19:20:17.225298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.144 qpair failed and we were unable to recover it.
00:30:00.144 [2024-11-26 19:20:17.235207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.144 [2024-11-26 19:20:17.235256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.144 [2024-11-26 19:20:17.235270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.144 [2024-11-26 19:20:17.235277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.144 [2024-11-26 19:20:17.235284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.144 [2024-11-26 19:20:17.235297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.144 qpair failed and we were unable to recover it.
00:30:00.144 [2024-11-26 19:20:17.245235] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.144 [2024-11-26 19:20:17.245281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.144 [2024-11-26 19:20:17.245296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.144 [2024-11-26 19:20:17.245303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.144 [2024-11-26 19:20:17.245310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.144 [2024-11-26 19:20:17.245324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.144 qpair failed and we were unable to recover it.
00:30:00.145 [2024-11-26 19:20:17.255305] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.145 [2024-11-26 19:20:17.255387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.145 [2024-11-26 19:20:17.255400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.145 [2024-11-26 19:20:17.255408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.145 [2024-11-26 19:20:17.255414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.145 [2024-11-26 19:20:17.255428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.145 qpair failed and we were unable to recover it.
00:30:00.145 [2024-11-26 19:20:17.265301] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.145 [2024-11-26 19:20:17.265349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.145 [2024-11-26 19:20:17.265363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.145 [2024-11-26 19:20:17.265370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.145 [2024-11-26 19:20:17.265376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.145 [2024-11-26 19:20:17.265390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.145 qpair failed and we were unable to recover it.
00:30:00.145 [2024-11-26 19:20:17.275315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.145 [2024-11-26 19:20:17.275366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.145 [2024-11-26 19:20:17.275379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.145 [2024-11-26 19:20:17.275387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.145 [2024-11-26 19:20:17.275395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.145 [2024-11-26 19:20:17.275409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.145 qpair failed and we were unable to recover it.
00:30:00.145 [2024-11-26 19:20:17.285343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.145 [2024-11-26 19:20:17.285397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.145 [2024-11-26 19:20:17.285411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.145 [2024-11-26 19:20:17.285417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.145 [2024-11-26 19:20:17.285424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.145 [2024-11-26 19:20:17.285437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.145 qpair failed and we were unable to recover it.
00:30:00.145 [2024-11-26 19:20:17.295426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.145 [2024-11-26 19:20:17.295481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.145 [2024-11-26 19:20:17.295494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.145 [2024-11-26 19:20:17.295501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.145 [2024-11-26 19:20:17.295507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.145 [2024-11-26 19:20:17.295520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.145 qpair failed and we were unable to recover it.
00:30:00.145 [2024-11-26 19:20:17.305440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.145 [2024-11-26 19:20:17.305505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.145 [2024-11-26 19:20:17.305517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.145 [2024-11-26 19:20:17.305524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.145 [2024-11-26 19:20:17.305531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.145 [2024-11-26 19:20:17.305544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.145 qpair failed and we were unable to recover it.
00:30:00.145 [2024-11-26 19:20:17.315447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.145 [2024-11-26 19:20:17.315493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.145 [2024-11-26 19:20:17.315510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.145 [2024-11-26 19:20:17.315517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.145 [2024-11-26 19:20:17.315524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.145 [2024-11-26 19:20:17.315537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.145 qpair failed and we were unable to recover it.
00:30:00.145 [2024-11-26 19:20:17.325439] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.145 [2024-11-26 19:20:17.325508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.145 [2024-11-26 19:20:17.325521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.145 [2024-11-26 19:20:17.325528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.145 [2024-11-26 19:20:17.325534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.145 [2024-11-26 19:20:17.325548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.145 qpair failed and we were unable to recover it.
00:30:00.145 [2024-11-26 19:20:17.335531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.145 [2024-11-26 19:20:17.335584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.145 [2024-11-26 19:20:17.335597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.145 [2024-11-26 19:20:17.335604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.145 [2024-11-26 19:20:17.335611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.145 [2024-11-26 19:20:17.335624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.145 qpair failed and we were unable to recover it.
00:30:00.145 [2024-11-26 19:20:17.345524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.145 [2024-11-26 19:20:17.345575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.145 [2024-11-26 19:20:17.345588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.145 [2024-11-26 19:20:17.345595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.145 [2024-11-26 19:20:17.345601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.145 [2024-11-26 19:20:17.345615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.145 qpair failed and we were unable to recover it.
00:30:00.407 [2024-11-26 19:20:17.355531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.407 [2024-11-26 19:20:17.355585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.407 [2024-11-26 19:20:17.355599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.407 [2024-11-26 19:20:17.355606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.407 [2024-11-26 19:20:17.355615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.407 [2024-11-26 19:20:17.355628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.407 qpair failed and we were unable to recover it.
00:30:00.407 [2024-11-26 19:20:17.365561] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.407 [2024-11-26 19:20:17.365620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.407 [2024-11-26 19:20:17.365634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.407 [2024-11-26 19:20:17.365641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.407 [2024-11-26 19:20:17.365647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.407 [2024-11-26 19:20:17.365660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.407 qpair failed and we were unable to recover it.
00:30:00.407 [2024-11-26 19:20:17.375626] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.407 [2024-11-26 19:20:17.375683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.407 [2024-11-26 19:20:17.375698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.407 [2024-11-26 19:20:17.375705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.407 [2024-11-26 19:20:17.375712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.407 [2024-11-26 19:20:17.375729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.407 qpair failed and we were unable to recover it.
00:30:00.407 [2024-11-26 19:20:17.385627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.407 [2024-11-26 19:20:17.385678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.407 [2024-11-26 19:20:17.385692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.407 [2024-11-26 19:20:17.385699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.407 [2024-11-26 19:20:17.385705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.407 [2024-11-26 19:20:17.385718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.407 qpair failed and we were unable to recover it.
00:30:00.407 [2024-11-26 19:20:17.395638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.407 [2024-11-26 19:20:17.395688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.407 [2024-11-26 19:20:17.395701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.407 [2024-11-26 19:20:17.395708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.407 [2024-11-26 19:20:17.395714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.407 [2024-11-26 19:20:17.395727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.407 qpair failed and we were unable to recover it.
00:30:00.407 [2024-11-26 19:20:17.405704] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.407 [2024-11-26 19:20:17.405792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.407 [2024-11-26 19:20:17.405805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.407 [2024-11-26 19:20:17.405812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.407 [2024-11-26 19:20:17.405819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.407 [2024-11-26 19:20:17.405832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.407 qpair failed and we were unable to recover it.
00:30:00.407 [2024-11-26 19:20:17.415742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.407 [2024-11-26 19:20:17.415794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.407 [2024-11-26 19:20:17.415807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.407 [2024-11-26 19:20:17.415814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.407 [2024-11-26 19:20:17.415820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.407 [2024-11-26 19:20:17.415833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.407 qpair failed and we were unable to recover it.
00:30:00.407 [2024-11-26 19:20:17.425715] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.407 [2024-11-26 19:20:17.425762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.407 [2024-11-26 19:20:17.425775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.407 [2024-11-26 19:20:17.425782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.407 [2024-11-26 19:20:17.425789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.407 [2024-11-26 19:20:17.425802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.407 qpair failed and we were unable to recover it.
00:30:00.407 [2024-11-26 19:20:17.435761] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.407 [2024-11-26 19:20:17.435835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.408 [2024-11-26 19:20:17.435848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.408 [2024-11-26 19:20:17.435855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.408 [2024-11-26 19:20:17.435861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.408 [2024-11-26 19:20:17.435874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.408 qpair failed and we were unable to recover it.
00:30:00.408 [2024-11-26 19:20:17.445793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.408 [2024-11-26 19:20:17.445840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.408 [2024-11-26 19:20:17.445857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.408 [2024-11-26 19:20:17.445864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.408 [2024-11-26 19:20:17.445870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.408 [2024-11-26 19:20:17.445883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.408 qpair failed and we were unable to recover it.
00:30:00.408 [2024-11-26 19:20:17.455858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.408 [2024-11-26 19:20:17.455925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.408 [2024-11-26 19:20:17.455950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.408 [2024-11-26 19:20:17.455959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.408 [2024-11-26 19:20:17.455966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.408 [2024-11-26 19:20:17.455985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.408 qpair failed and we were unable to recover it.
00:30:00.408 [2024-11-26 19:20:17.465857] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.408 [2024-11-26 19:20:17.465919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.408 [2024-11-26 19:20:17.465944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.408 [2024-11-26 19:20:17.465953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.408 [2024-11-26 19:20:17.465960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.408 [2024-11-26 19:20:17.465978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.408 qpair failed and we were unable to recover it.
00:30:00.408 [2024-11-26 19:20:17.475864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.408 [2024-11-26 19:20:17.475909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.408 [2024-11-26 19:20:17.475925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.408 [2024-11-26 19:20:17.475932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.408 [2024-11-26 19:20:17.475938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.408 [2024-11-26 19:20:17.475953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.408 qpair failed and we were unable to recover it.
00:30:00.408 [2024-11-26 19:20:17.485889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.408 [2024-11-26 19:20:17.485972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.408 [2024-11-26 19:20:17.485986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.408 [2024-11-26 19:20:17.485993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.408 [2024-11-26 19:20:17.486004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.408 [2024-11-26 19:20:17.486018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.408 qpair failed and we were unable to recover it.
00:30:00.408 [2024-11-26 19:20:17.495956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.408 [2024-11-26 19:20:17.496008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.408 [2024-11-26 19:20:17.496022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.408 [2024-11-26 19:20:17.496029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.408 [2024-11-26 19:20:17.496035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.408 [2024-11-26 19:20:17.496049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.408 qpair failed and we were unable to recover it.
00:30:00.408 [2024-11-26 19:20:17.505858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.408 [2024-11-26 19:20:17.505912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.408 [2024-11-26 19:20:17.505925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.408 [2024-11-26 19:20:17.505932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.408 [2024-11-26 19:20:17.505938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.408 [2024-11-26 19:20:17.505952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.408 qpair failed and we were unable to recover it.
00:30:00.408 [2024-11-26 19:20:17.515974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.408 [2024-11-26 19:20:17.516019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.408 [2024-11-26 19:20:17.516032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.408 [2024-11-26 19:20:17.516039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.408 [2024-11-26 19:20:17.516046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.408 [2024-11-26 19:20:17.516059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.408 qpair failed and we were unable to recover it.
00:30:00.408 [2024-11-26 19:20:17.525985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.408 [2024-11-26 19:20:17.526043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.408 [2024-11-26 19:20:17.526057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.408 [2024-11-26 19:20:17.526064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.408 [2024-11-26 19:20:17.526071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.408 [2024-11-26 19:20:17.526084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.408 qpair failed and we were unable to recover it.
00:30:00.408 [2024-11-26 19:20:17.536033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.408 [2024-11-26 19:20:17.536084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.408 [2024-11-26 19:20:17.536097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.408 [2024-11-26 19:20:17.536104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.408 [2024-11-26 19:20:17.536110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.408 [2024-11-26 19:20:17.536124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.408 qpair failed and we were unable to recover it.
00:30:00.408 [2024-11-26 19:20:17.546049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.408 [2024-11-26 19:20:17.546138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.408 [2024-11-26 19:20:17.546151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.408 [2024-11-26 19:20:17.546162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.408 [2024-11-26 19:20:17.546169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.408 [2024-11-26 19:20:17.546183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.408 qpair failed and we were unable to recover it.
00:30:00.408 [2024-11-26 19:20:17.556058] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.408 [2024-11-26 19:20:17.556108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.408 [2024-11-26 19:20:17.556121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.408 [2024-11-26 19:20:17.556128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.408 [2024-11-26 19:20:17.556134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.408 [2024-11-26 19:20:17.556147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.408 qpair failed and we were unable to recover it.
00:30:00.408 [2024-11-26 19:20:17.566136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.408 [2024-11-26 19:20:17.566223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.408 [2024-11-26 19:20:17.566237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.408 [2024-11-26 19:20:17.566244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.408 [2024-11-26 19:20:17.566250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.408 [2024-11-26 19:20:17.566264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.408 qpair failed and we were unable to recover it.
00:30:00.408 [2024-11-26 19:20:17.576175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.408 [2024-11-26 19:20:17.576227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.408 [2024-11-26 19:20:17.576243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.408 [2024-11-26 19:20:17.576251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.408 [2024-11-26 19:20:17.576257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.408 [2024-11-26 19:20:17.576270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.408 qpair failed and we were unable to recover it.
00:30:00.409 [2024-11-26 19:20:17.586181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.409 [2024-11-26 19:20:17.586229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.409 [2024-11-26 19:20:17.586242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.409 [2024-11-26 19:20:17.586249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.409 [2024-11-26 19:20:17.586255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.409 [2024-11-26 19:20:17.586270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.409 qpair failed and we were unable to recover it.
00:30:00.409 [2024-11-26 19:20:17.596183] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.409 [2024-11-26 19:20:17.596228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.409 [2024-11-26 19:20:17.596243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.409 [2024-11-26 19:20:17.596250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.409 [2024-11-26 19:20:17.596256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.409 [2024-11-26 19:20:17.596271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.409 qpair failed and we were unable to recover it.
00:30:00.409 [2024-11-26 19:20:17.606169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.409 [2024-11-26 19:20:17.606214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.409 [2024-11-26 19:20:17.606227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.409 [2024-11-26 19:20:17.606234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.409 [2024-11-26 19:20:17.606241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.409 [2024-11-26 19:20:17.606254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.409 qpair failed and we were unable to recover it.
00:30:00.670 [2024-11-26 19:20:17.616286] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.670 [2024-11-26 19:20:17.616374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.670 [2024-11-26 19:20:17.616387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.670 [2024-11-26 19:20:17.616394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.670 [2024-11-26 19:20:17.616404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.670 [2024-11-26 19:20:17.616418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.670 qpair failed and we were unable to recover it.
00:30:00.670 [2024-11-26 19:20:17.626298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.670 [2024-11-26 19:20:17.626346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.670 [2024-11-26 19:20:17.626360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.670 [2024-11-26 19:20:17.626367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.670 [2024-11-26 19:20:17.626373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:00.670 [2024-11-26 19:20:17.626387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.670 qpair failed and we were unable to recover it.
00:30:00.670 [2024-11-26 19:20:17.636294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.670 [2024-11-26 19:20:17.636352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.670 [2024-11-26 19:20:17.636365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.670 [2024-11-26 19:20:17.636372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.670 [2024-11-26 19:20:17.636378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.670 [2024-11-26 19:20:17.636392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.670 qpair failed and we were unable to recover it. 00:30:00.670 [2024-11-26 19:20:17.646298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.670 [2024-11-26 19:20:17.646349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.670 [2024-11-26 19:20:17.646362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.670 [2024-11-26 19:20:17.646369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.670 [2024-11-26 19:20:17.646375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.670 [2024-11-26 19:20:17.646388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.670 qpair failed and we were unable to recover it. 00:30:00.670 [2024-11-26 19:20:17.656407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.670 [2024-11-26 19:20:17.656464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.670 [2024-11-26 19:20:17.656477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.670 [2024-11-26 19:20:17.656484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.670 [2024-11-26 19:20:17.656490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.670 [2024-11-26 19:20:17.656504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.670 qpair failed and we were unable to recover it. 
00:30:00.670 [2024-11-26 19:20:17.666387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.670 [2024-11-26 19:20:17.666439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.670 [2024-11-26 19:20:17.666452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.670 [2024-11-26 19:20:17.666459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.670 [2024-11-26 19:20:17.666465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.670 [2024-11-26 19:20:17.666479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.670 qpair failed and we were unable to recover it. 00:30:00.670 [2024-11-26 19:20:17.676405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.670 [2024-11-26 19:20:17.676471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.670 [2024-11-26 19:20:17.676484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.670 [2024-11-26 19:20:17.676491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.670 [2024-11-26 19:20:17.676497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.670 [2024-11-26 19:20:17.676510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.670 qpair failed and we were unable to recover it. 00:30:00.670 [2024-11-26 19:20:17.686429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.670 [2024-11-26 19:20:17.686523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.670 [2024-11-26 19:20:17.686535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.670 [2024-11-26 19:20:17.686542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.670 [2024-11-26 19:20:17.686549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.670 [2024-11-26 19:20:17.686562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.670 qpair failed and we were unable to recover it. 
00:30:00.670 [2024-11-26 19:20:17.696509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.670 [2024-11-26 19:20:17.696562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.670 [2024-11-26 19:20:17.696576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.670 [2024-11-26 19:20:17.696583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.670 [2024-11-26 19:20:17.696590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.670 [2024-11-26 19:20:17.696604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.670 qpair failed and we were unable to recover it. 00:30:00.670 [2024-11-26 19:20:17.706507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.670 [2024-11-26 19:20:17.706589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.670 [2024-11-26 19:20:17.706605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.670 [2024-11-26 19:20:17.706612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.670 [2024-11-26 19:20:17.706618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.670 [2024-11-26 19:20:17.706632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.670 qpair failed and we were unable to recover it. 00:30:00.670 [2024-11-26 19:20:17.716486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.670 [2024-11-26 19:20:17.716533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.670 [2024-11-26 19:20:17.716546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.671 [2024-11-26 19:20:17.716553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.671 [2024-11-26 19:20:17.716559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.671 [2024-11-26 19:20:17.716573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.671 qpair failed and we were unable to recover it. 
00:30:00.671 [2024-11-26 19:20:17.726532] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.671 [2024-11-26 19:20:17.726579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.671 [2024-11-26 19:20:17.726592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.671 [2024-11-26 19:20:17.726599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.671 [2024-11-26 19:20:17.726605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.671 [2024-11-26 19:20:17.726619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.671 qpair failed and we were unable to recover it. 00:30:00.671 [2024-11-26 19:20:17.736625] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.671 [2024-11-26 19:20:17.736706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.671 [2024-11-26 19:20:17.736719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.671 [2024-11-26 19:20:17.736726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.671 [2024-11-26 19:20:17.736732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.671 [2024-11-26 19:20:17.736746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.671 qpair failed and we were unable to recover it. 00:30:00.671 [2024-11-26 19:20:17.746666] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.671 [2024-11-26 19:20:17.746715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.671 [2024-11-26 19:20:17.746728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.671 [2024-11-26 19:20:17.746735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.671 [2024-11-26 19:20:17.746745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.671 [2024-11-26 19:20:17.746758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.671 qpair failed and we were unable to recover it. 
00:30:00.671 [2024-11-26 19:20:17.756631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.671 [2024-11-26 19:20:17.756677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.671 [2024-11-26 19:20:17.756690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.671 [2024-11-26 19:20:17.756697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.671 [2024-11-26 19:20:17.756703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.671 [2024-11-26 19:20:17.756717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.671 qpair failed and we were unable to recover it. 00:30:00.671 [2024-11-26 19:20:17.766648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.671 [2024-11-26 19:20:17.766698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.671 [2024-11-26 19:20:17.766711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.671 [2024-11-26 19:20:17.766717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.671 [2024-11-26 19:20:17.766724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.671 [2024-11-26 19:20:17.766737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.671 qpair failed and we were unable to recover it. 00:30:00.671 [2024-11-26 19:20:17.776733] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.671 [2024-11-26 19:20:17.776788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.671 [2024-11-26 19:20:17.776802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.671 [2024-11-26 19:20:17.776809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.671 [2024-11-26 19:20:17.776815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.671 [2024-11-26 19:20:17.776829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.671 qpair failed and we were unable to recover it. 
00:30:00.671 [2024-11-26 19:20:17.786714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.671 [2024-11-26 19:20:17.786776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.671 [2024-11-26 19:20:17.786788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.671 [2024-11-26 19:20:17.786795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.671 [2024-11-26 19:20:17.786802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.671 [2024-11-26 19:20:17.786815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.671 qpair failed and we were unable to recover it. 00:30:00.671 [2024-11-26 19:20:17.796733] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.671 [2024-11-26 19:20:17.796780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.671 [2024-11-26 19:20:17.796793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.671 [2024-11-26 19:20:17.796800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.671 [2024-11-26 19:20:17.796806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.671 [2024-11-26 19:20:17.796819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.671 qpair failed and we were unable to recover it. 00:30:00.671 [2024-11-26 19:20:17.806767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.671 [2024-11-26 19:20:17.806822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.671 [2024-11-26 19:20:17.806835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.671 [2024-11-26 19:20:17.806842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.671 [2024-11-26 19:20:17.806848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.671 [2024-11-26 19:20:17.806861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.671 qpair failed and we were unable to recover it. 
00:30:00.671 [2024-11-26 19:20:17.816829] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.671 [2024-11-26 19:20:17.816882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.671 [2024-11-26 19:20:17.816896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.671 [2024-11-26 19:20:17.816903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.671 [2024-11-26 19:20:17.816909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.671 [2024-11-26 19:20:17.816922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.671 qpair failed and we were unable to recover it. 00:30:00.671 [2024-11-26 19:20:17.826843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.671 [2024-11-26 19:20:17.826904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.671 [2024-11-26 19:20:17.826929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.671 [2024-11-26 19:20:17.826938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.671 [2024-11-26 19:20:17.826945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.671 [2024-11-26 19:20:17.826964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.671 qpair failed and we were unable to recover it. 00:30:00.671 [2024-11-26 19:20:17.836867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.671 [2024-11-26 19:20:17.836923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.671 [2024-11-26 19:20:17.836955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.671 [2024-11-26 19:20:17.836964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.671 [2024-11-26 19:20:17.836971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.671 [2024-11-26 19:20:17.836990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.671 qpair failed and we were unable to recover it. 
00:30:00.671 [2024-11-26 19:20:17.846895] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.671 [2024-11-26 19:20:17.846955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.671 [2024-11-26 19:20:17.846980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.672 [2024-11-26 19:20:17.846989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.672 [2024-11-26 19:20:17.846996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.672 [2024-11-26 19:20:17.847015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.672 qpair failed and we were unable to recover it. 00:30:00.672 [2024-11-26 19:20:17.856945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.672 [2024-11-26 19:20:17.856999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.672 [2024-11-26 19:20:17.857015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.672 [2024-11-26 19:20:17.857022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.672 [2024-11-26 19:20:17.857028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.672 [2024-11-26 19:20:17.857043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.672 qpair failed and we were unable to recover it. 00:30:00.672 [2024-11-26 19:20:17.866954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.672 [2024-11-26 19:20:17.867038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.672 [2024-11-26 19:20:17.867052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.672 [2024-11-26 19:20:17.867059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.672 [2024-11-26 19:20:17.867065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.672 [2024-11-26 19:20:17.867079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.672 qpair failed and we were unable to recover it. 
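Two details repeat across the attempts above: the bracketed timestamps advance by roughly 10 ms per attempt, and every attempt reports the same tqpair=0x13780c0, consistent with a reconnect loop re-polling one TCP qpair (or simply with the allocator reusing the same address). A small sketch, assuming only the timestamp layout visible in these lines, that measures the retry cadence from a chunk of this log:

# Sketch: estimate the cadence of the failed CONNECT attempts by pulling
# the bracketed wall-clock timestamps off the per-attempt ctrlr.c line.
# The regex is inferred from this log's format, not from any SPDK tool.
import re
from datetime import datetime

STAMP = re.compile(r"\[(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{6})\] ctrlr\.c")

def connect_attempt_gaps(log_text: str):
    times = [datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S.%f")
             for m in STAMP.finditer(log_text)]
    return [(b - a).total_seconds() for a, b in zip(times, times[1:])]

sample = ("[2024-11-26 19:20:17.846895] ctrlr.c: ...\n"
          "[2024-11-26 19:20:17.856945] ctrlr.c: ...\n")
print(connect_attempt_gaps(sample))  # -> [0.01005], i.e. about 10 ms apart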
00:30:00.672 [2024-11-26 19:20:17.876962] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.672 [2024-11-26 19:20:17.877039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.672 [2024-11-26 19:20:17.877053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.672 [2024-11-26 19:20:17.877060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.672 [2024-11-26 19:20:17.877071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.672 [2024-11-26 19:20:17.877085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.672 qpair failed and we were unable to recover it. 00:30:00.934 [2024-11-26 19:20:17.886973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.934 [2024-11-26 19:20:17.887019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.934 [2024-11-26 19:20:17.887033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.934 [2024-11-26 19:20:17.887040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.934 [2024-11-26 19:20:17.887046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.934 [2024-11-26 19:20:17.887060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.934 qpair failed and we were unable to recover it. 00:30:00.934 [2024-11-26 19:20:17.897062] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.934 [2024-11-26 19:20:17.897115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.934 [2024-11-26 19:20:17.897128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.934 [2024-11-26 19:20:17.897135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.934 [2024-11-26 19:20:17.897142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.934 [2024-11-26 19:20:17.897156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.935 qpair failed and we were unable to recover it. 
00:30:00.935 [2024-11-26 19:20:17.906958] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.935 [2024-11-26 19:20:17.907017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.935 [2024-11-26 19:20:17.907030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.935 [2024-11-26 19:20:17.907037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.935 [2024-11-26 19:20:17.907043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.935 [2024-11-26 19:20:17.907057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.935 qpair failed and we were unable to recover it. 00:30:00.935 [2024-11-26 19:20:17.917075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.935 [2024-11-26 19:20:17.917133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.935 [2024-11-26 19:20:17.917146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.935 [2024-11-26 19:20:17.917153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.935 [2024-11-26 19:20:17.917163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.935 [2024-11-26 19:20:17.917177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.935 qpair failed and we were unable to recover it. 00:30:00.935 [2024-11-26 19:20:17.927108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.935 [2024-11-26 19:20:17.927156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.935 [2024-11-26 19:20:17.927172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.935 [2024-11-26 19:20:17.927179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.935 [2024-11-26 19:20:17.927186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.935 [2024-11-26 19:20:17.927200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.935 qpair failed and we were unable to recover it. 
00:30:00.935 [2024-11-26 19:20:17.937176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.935 [2024-11-26 19:20:17.937231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.935 [2024-11-26 19:20:17.937245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.935 [2024-11-26 19:20:17.937252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.935 [2024-11-26 19:20:17.937258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.935 [2024-11-26 19:20:17.937272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.935 qpair failed and we were unable to recover it. 00:30:00.935 [2024-11-26 19:20:17.947188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.935 [2024-11-26 19:20:17.947236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.935 [2024-11-26 19:20:17.947250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.935 [2024-11-26 19:20:17.947257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.935 [2024-11-26 19:20:17.947263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.935 [2024-11-26 19:20:17.947277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.935 qpair failed and we were unable to recover it. 00:30:00.935 [2024-11-26 19:20:17.957057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.935 [2024-11-26 19:20:17.957108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.935 [2024-11-26 19:20:17.957121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.935 [2024-11-26 19:20:17.957129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.935 [2024-11-26 19:20:17.957135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.935 [2024-11-26 19:20:17.957148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.935 qpair failed and we were unable to recover it. 
00:30:00.935 [2024-11-26 19:20:17.967209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.935 [2024-11-26 19:20:17.967253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.935 [2024-11-26 19:20:17.967270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.935 [2024-11-26 19:20:17.967277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.935 [2024-11-26 19:20:17.967283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.935 [2024-11-26 19:20:17.967297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.935 qpair failed and we were unable to recover it. 00:30:00.935 [2024-11-26 19:20:17.977289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.935 [2024-11-26 19:20:17.977341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.935 [2024-11-26 19:20:17.977354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.935 [2024-11-26 19:20:17.977361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.935 [2024-11-26 19:20:17.977368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.935 [2024-11-26 19:20:17.977381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.935 qpair failed and we were unable to recover it. 00:30:00.935 [2024-11-26 19:20:17.987286] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.935 [2024-11-26 19:20:17.987337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.935 [2024-11-26 19:20:17.987350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.935 [2024-11-26 19:20:17.987357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.935 [2024-11-26 19:20:17.987364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.935 [2024-11-26 19:20:17.987378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.935 qpair failed and we were unable to recover it. 
00:30:00.935 [2024-11-26 19:20:17.997289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.935 [2024-11-26 19:20:17.997336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.935 [2024-11-26 19:20:17.997349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.935 [2024-11-26 19:20:17.997356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.935 [2024-11-26 19:20:17.997362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.935 [2024-11-26 19:20:17.997375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.935 qpair failed and we were unable to recover it. 00:30:00.935 [2024-11-26 19:20:18.007285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.935 [2024-11-26 19:20:18.007333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.935 [2024-11-26 19:20:18.007345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.935 [2024-11-26 19:20:18.007353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.935 [2024-11-26 19:20:18.007363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.935 [2024-11-26 19:20:18.007376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.935 qpair failed and we were unable to recover it. 00:30:00.935 [2024-11-26 19:20:18.017409] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.935 [2024-11-26 19:20:18.017463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.935 [2024-11-26 19:20:18.017476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.935 [2024-11-26 19:20:18.017483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.935 [2024-11-26 19:20:18.017489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.935 [2024-11-26 19:20:18.017503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.935 qpair failed and we were unable to recover it. 
00:30:00.935 [2024-11-26 19:20:18.027387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.935 [2024-11-26 19:20:18.027440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.935 [2024-11-26 19:20:18.027454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.935 [2024-11-26 19:20:18.027462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.935 [2024-11-26 19:20:18.027469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.935 [2024-11-26 19:20:18.027483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.936 qpair failed and we were unable to recover it. 00:30:00.936 [2024-11-26 19:20:18.037390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.936 [2024-11-26 19:20:18.037440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.936 [2024-11-26 19:20:18.037454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.936 [2024-11-26 19:20:18.037461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.936 [2024-11-26 19:20:18.037467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.936 [2024-11-26 19:20:18.037481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.936 qpair failed and we were unable to recover it. 00:30:00.936 [2024-11-26 19:20:18.047429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.936 [2024-11-26 19:20:18.047518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.936 [2024-11-26 19:20:18.047531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.936 [2024-11-26 19:20:18.047539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.936 [2024-11-26 19:20:18.047545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.936 [2024-11-26 19:20:18.047558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.936 qpair failed and we were unable to recover it. 
00:30:00.936 [2024-11-26 19:20:18.057497] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.936 [2024-11-26 19:20:18.057553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.936 [2024-11-26 19:20:18.057566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.936 [2024-11-26 19:20:18.057573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.936 [2024-11-26 19:20:18.057580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.936 [2024-11-26 19:20:18.057593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.936 qpair failed and we were unable to recover it. 00:30:00.936 [2024-11-26 19:20:18.067507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.936 [2024-11-26 19:20:18.067555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.936 [2024-11-26 19:20:18.067568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.936 [2024-11-26 19:20:18.067575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.936 [2024-11-26 19:20:18.067581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.936 [2024-11-26 19:20:18.067594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.936 qpair failed and we were unable to recover it. 00:30:00.936 [2024-11-26 19:20:18.077477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.936 [2024-11-26 19:20:18.077550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.936 [2024-11-26 19:20:18.077563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.936 [2024-11-26 19:20:18.077570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.936 [2024-11-26 19:20:18.077576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.936 [2024-11-26 19:20:18.077591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.936 qpair failed and we were unable to recover it. 
00:30:00.936 [2024-11-26 19:20:18.087510] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.936 [2024-11-26 19:20:18.087557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.936 [2024-11-26 19:20:18.087570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.936 [2024-11-26 19:20:18.087577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.936 [2024-11-26 19:20:18.087583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.936 [2024-11-26 19:20:18.087596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.936 qpair failed and we were unable to recover it. 00:30:00.936 [2024-11-26 19:20:18.097531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.936 [2024-11-26 19:20:18.097594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.936 [2024-11-26 19:20:18.097610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.936 [2024-11-26 19:20:18.097617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.936 [2024-11-26 19:20:18.097623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.936 [2024-11-26 19:20:18.097636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.936 qpair failed and we were unable to recover it. 00:30:00.936 [2024-11-26 19:20:18.107579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.936 [2024-11-26 19:20:18.107650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.936 [2024-11-26 19:20:18.107663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.936 [2024-11-26 19:20:18.107670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.936 [2024-11-26 19:20:18.107676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.936 [2024-11-26 19:20:18.107689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.936 qpair failed and we were unable to recover it. 
00:30:00.936 [2024-11-26 19:20:18.117598] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.936 [2024-11-26 19:20:18.117644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.936 [2024-11-26 19:20:18.117660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.936 [2024-11-26 19:20:18.117667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.936 [2024-11-26 19:20:18.117673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.936 [2024-11-26 19:20:18.117691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.936 qpair failed and we were unable to recover it. 00:30:00.936 [2024-11-26 19:20:18.127624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.936 [2024-11-26 19:20:18.127673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.936 [2024-11-26 19:20:18.127687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.936 [2024-11-26 19:20:18.127694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.936 [2024-11-26 19:20:18.127700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.936 [2024-11-26 19:20:18.127714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.936 qpair failed and we were unable to recover it. 00:30:00.936 [2024-11-26 19:20:18.137668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.936 [2024-11-26 19:20:18.137747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.936 [2024-11-26 19:20:18.137760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.936 [2024-11-26 19:20:18.137771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.936 [2024-11-26 19:20:18.137778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:00.936 [2024-11-26 19:20:18.137792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.936 qpair failed and we were unable to recover it. 
00:30:01.198 [2024-11-26 19:20:18.147682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.198 [2024-11-26 19:20:18.147730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.198 [2024-11-26 19:20:18.147743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.198 [2024-11-26 19:20:18.147751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.198 [2024-11-26 19:20:18.147757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.198 [2024-11-26 19:20:18.147770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.198 qpair failed and we were unable to recover it.
00:30:01.198 [2024-11-26 19:20:18.157667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.198 [2024-11-26 19:20:18.157716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.198 [2024-11-26 19:20:18.157730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.198 [2024-11-26 19:20:18.157737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.198 [2024-11-26 19:20:18.157745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.198 [2024-11-26 19:20:18.157759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.198 qpair failed and we were unable to recover it.
00:30:01.198 [2024-11-26 19:20:18.167740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.198 [2024-11-26 19:20:18.167789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.198 [2024-11-26 19:20:18.167802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.198 [2024-11-26 19:20:18.167809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.198 [2024-11-26 19:20:18.167816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.198 [2024-11-26 19:20:18.167829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.198 qpair failed and we were unable to recover it.
00:30:01.198 [2024-11-26 19:20:18.177925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.198 [2024-11-26 19:20:18.177972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.198 [2024-11-26 19:20:18.177986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.198 [2024-11-26 19:20:18.177994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.198 [2024-11-26 19:20:18.178000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.198 [2024-11-26 19:20:18.178014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.198 qpair failed and we were unable to recover it.
00:30:01.198 [2024-11-26 19:20:18.187688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.198 [2024-11-26 19:20:18.187745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.198 [2024-11-26 19:20:18.187769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.198 [2024-11-26 19:20:18.187782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.198 [2024-11-26 19:20:18.187791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.198 [2024-11-26 19:20:18.187811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.198 qpair failed and we were unable to recover it.
00:30:01.198 [2024-11-26 19:20:18.197790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.198 [2024-11-26 19:20:18.197837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.199 [2024-11-26 19:20:18.197852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.199 [2024-11-26 19:20:18.197860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.199 [2024-11-26 19:20:18.197866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.199 [2024-11-26 19:20:18.197881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.199 qpair failed and we were unable to recover it.
00:30:01.199 [2024-11-26 19:20:18.207845] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.199 [2024-11-26 19:20:18.207894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.199 [2024-11-26 19:20:18.207920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.199 [2024-11-26 19:20:18.207929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.199 [2024-11-26 19:20:18.207936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.199 [2024-11-26 19:20:18.207955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.199 qpair failed and we were unable to recover it.
00:30:01.199 [2024-11-26 19:20:18.217758] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.199 [2024-11-26 19:20:18.217808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.199 [2024-11-26 19:20:18.217827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.199 [2024-11-26 19:20:18.217834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.199 [2024-11-26 19:20:18.217841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.199 [2024-11-26 19:20:18.217857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.199 qpair failed and we were unable to recover it.
00:30:01.199 [2024-11-26 19:20:18.227892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.199 [2024-11-26 19:20:18.227956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.199 [2024-11-26 19:20:18.227974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.199 [2024-11-26 19:20:18.227981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.199 [2024-11-26 19:20:18.227988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.199 [2024-11-26 19:20:18.228002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.199 qpair failed and we were unable to recover it.
00:30:01.199 [2024-11-26 19:20:18.237901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.199 [2024-11-26 19:20:18.237947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.199 [2024-11-26 19:20:18.237973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.199 [2024-11-26 19:20:18.237981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.199 [2024-11-26 19:20:18.237988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.199 [2024-11-26 19:20:18.238007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.199 qpair failed and we were unable to recover it.
00:30:01.199 [2024-11-26 19:20:18.247947] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.199 [2024-11-26 19:20:18.247998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.199 [2024-11-26 19:20:18.248023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.199 [2024-11-26 19:20:18.248032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.199 [2024-11-26 19:20:18.248039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.199 [2024-11-26 19:20:18.248058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.199 qpair failed and we were unable to recover it.
00:30:01.199 [2024-11-26 19:20:18.258027] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.199 [2024-11-26 19:20:18.258085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.199 [2024-11-26 19:20:18.258101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.199 [2024-11-26 19:20:18.258108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.199 [2024-11-26 19:20:18.258115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.199 [2024-11-26 19:20:18.258130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.199 qpair failed and we were unable to recover it.
00:30:01.199 [2024-11-26 19:20:18.267891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.199 [2024-11-26 19:20:18.267942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.199 [2024-11-26 19:20:18.267956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.199 [2024-11-26 19:20:18.267968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.199 [2024-11-26 19:20:18.267975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.199 [2024-11-26 19:20:18.267989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.199 qpair failed and we were unable to recover it.
00:30:01.199 [2024-11-26 19:20:18.278040] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.199 [2024-11-26 19:20:18.278082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.199 [2024-11-26 19:20:18.278095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.199 [2024-11-26 19:20:18.278103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.199 [2024-11-26 19:20:18.278111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.199 [2024-11-26 19:20:18.278125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.199 qpair failed and we were unable to recover it.
00:30:01.199 [2024-11-26 19:20:18.288072] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.199 [2024-11-26 19:20:18.288117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.199 [2024-11-26 19:20:18.288132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.199 [2024-11-26 19:20:18.288139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.199 [2024-11-26 19:20:18.288145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.199 [2024-11-26 19:20:18.288163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.199 qpair failed and we were unable to recover it.
00:30:01.199 [2024-11-26 19:20:18.298090] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.199 [2024-11-26 19:20:18.298171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.199 [2024-11-26 19:20:18.298184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.199 [2024-11-26 19:20:18.298191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.199 [2024-11-26 19:20:18.298198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.199 [2024-11-26 19:20:18.298211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.199 qpair failed and we were unable to recover it.
00:30:01.199 [2024-11-26 19:20:18.307998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.199 [2024-11-26 19:20:18.308047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.199 [2024-11-26 19:20:18.308060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.199 [2024-11-26 19:20:18.308068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.199 [2024-11-26 19:20:18.308074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.199 [2024-11-26 19:20:18.308088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.199 qpair failed and we were unable to recover it.
00:30:01.199 [2024-11-26 19:20:18.318101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.199 [2024-11-26 19:20:18.318147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.199 [2024-11-26 19:20:18.318166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.199 [2024-11-26 19:20:18.318174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.199 [2024-11-26 19:20:18.318180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.199 [2024-11-26 19:20:18.318194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.199 qpair failed and we were unable to recover it.
00:30:01.199 [2024-11-26 19:20:18.328194] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.199 [2024-11-26 19:20:18.328238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.199 [2024-11-26 19:20:18.328253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.200 [2024-11-26 19:20:18.328260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.200 [2024-11-26 19:20:18.328266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.200 [2024-11-26 19:20:18.328280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.200 qpair failed and we were unable to recover it.
00:30:01.200 [2024-11-26 19:20:18.338211] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.200 [2024-11-26 19:20:18.338257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.200 [2024-11-26 19:20:18.338270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.200 [2024-11-26 19:20:18.338277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.200 [2024-11-26 19:20:18.338284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.200 [2024-11-26 19:20:18.338297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.200 qpair failed and we were unable to recover it.
00:30:01.200 [2024-11-26 19:20:18.348297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.200 [2024-11-26 19:20:18.348347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.200 [2024-11-26 19:20:18.348360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.200 [2024-11-26 19:20:18.348368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.200 [2024-11-26 19:20:18.348374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.200 [2024-11-26 19:20:18.348387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.200 qpair failed and we were unable to recover it.
00:30:01.200 [2024-11-26 19:20:18.358246] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.200 [2024-11-26 19:20:18.358293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.200 [2024-11-26 19:20:18.358310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.200 [2024-11-26 19:20:18.358317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.200 [2024-11-26 19:20:18.358323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.200 [2024-11-26 19:20:18.358337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.200 qpair failed and we were unable to recover it.
00:30:01.200 [2024-11-26 19:20:18.368260] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.200 [2024-11-26 19:20:18.368304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.200 [2024-11-26 19:20:18.368318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.200 [2024-11-26 19:20:18.368325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.200 [2024-11-26 19:20:18.368331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.200 [2024-11-26 19:20:18.368344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.200 qpair failed and we were unable to recover it.
00:30:01.200 [2024-11-26 19:20:18.378345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.200 [2024-11-26 19:20:18.378393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.200 [2024-11-26 19:20:18.378406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.200 [2024-11-26 19:20:18.378414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.200 [2024-11-26 19:20:18.378420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.200 [2024-11-26 19:20:18.378433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.200 qpair failed and we were unable to recover it.
00:30:01.200 [2024-11-26 19:20:18.388343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.200 [2024-11-26 19:20:18.388395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.200 [2024-11-26 19:20:18.388409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.200 [2024-11-26 19:20:18.388416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.200 [2024-11-26 19:20:18.388422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.200 [2024-11-26 19:20:18.388436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.200 qpair failed and we were unable to recover it.
00:30:01.200 [2024-11-26 19:20:18.398384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.200 [2024-11-26 19:20:18.398427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.200 [2024-11-26 19:20:18.398440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.200 [2024-11-26 19:20:18.398451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.200 [2024-11-26 19:20:18.398457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.200 [2024-11-26 19:20:18.398471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.200 qpair failed and we were unable to recover it.
00:30:01.462 [2024-11-26 19:20:18.408402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.462 [2024-11-26 19:20:18.408449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.462 [2024-11-26 19:20:18.408462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.462 [2024-11-26 19:20:18.408469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.462 [2024-11-26 19:20:18.408476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.462 [2024-11-26 19:20:18.408489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.462 qpair failed and we were unable to recover it.
00:30:01.462 [2024-11-26 19:20:18.418432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.462 [2024-11-26 19:20:18.418477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.462 [2024-11-26 19:20:18.418490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.462 [2024-11-26 19:20:18.418497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.462 [2024-11-26 19:20:18.418504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.462 [2024-11-26 19:20:18.418517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.462 qpair failed and we were unable to recover it.
00:30:01.462 [2024-11-26 19:20:18.428444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.462 [2024-11-26 19:20:18.428490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.462 [2024-11-26 19:20:18.428504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.462 [2024-11-26 19:20:18.428511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.462 [2024-11-26 19:20:18.428517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.462 [2024-11-26 19:20:18.428531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.462 qpair failed and we were unable to recover it.
00:30:01.462 [2024-11-26 19:20:18.438484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.462 [2024-11-26 19:20:18.438558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.462 [2024-11-26 19:20:18.438571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.462 [2024-11-26 19:20:18.438578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.462 [2024-11-26 19:20:18.438584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.462 [2024-11-26 19:20:18.438598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.462 qpair failed and we were unable to recover it.
00:30:01.462 [2024-11-26 19:20:18.448473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.462 [2024-11-26 19:20:18.448553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.462 [2024-11-26 19:20:18.448566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.462 [2024-11-26 19:20:18.448573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.462 [2024-11-26 19:20:18.448580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.462 [2024-11-26 19:20:18.448594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.462 qpair failed and we were unable to recover it.
00:30:01.462 [2024-11-26 19:20:18.458493] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.462 [2024-11-26 19:20:18.458538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.462 [2024-11-26 19:20:18.458551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.462 [2024-11-26 19:20:18.458558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.462 [2024-11-26 19:20:18.458565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.462 [2024-11-26 19:20:18.458578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.462 qpair failed and we were unable to recover it.
00:30:01.462 [2024-11-26 19:20:18.468566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.462 [2024-11-26 19:20:18.468612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.462 [2024-11-26 19:20:18.468626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.462 [2024-11-26 19:20:18.468633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.462 [2024-11-26 19:20:18.468639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.462 [2024-11-26 19:20:18.468653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.462 qpair failed and we were unable to recover it.
00:30:01.462 [2024-11-26 19:20:18.478435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.463 [2024-11-26 19:20:18.478484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.463 [2024-11-26 19:20:18.478497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.463 [2024-11-26 19:20:18.478504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.463 [2024-11-26 19:20:18.478510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.463 [2024-11-26 19:20:18.478523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.463 qpair failed and we were unable to recover it.
00:30:01.463 [2024-11-26 19:20:18.488563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.463 [2024-11-26 19:20:18.488614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.463 [2024-11-26 19:20:18.488628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.463 [2024-11-26 19:20:18.488635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.463 [2024-11-26 19:20:18.488641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.463 [2024-11-26 19:20:18.488654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.463 qpair failed and we were unable to recover it.
00:30:01.463 [2024-11-26 19:20:18.498617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.463 [2024-11-26 19:20:18.498662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.463 [2024-11-26 19:20:18.498675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.463 [2024-11-26 19:20:18.498682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.463 [2024-11-26 19:20:18.498688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.463 [2024-11-26 19:20:18.498702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.463 qpair failed and we were unable to recover it.
00:30:01.463 [2024-11-26 19:20:18.508650] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.463 [2024-11-26 19:20:18.508707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.463 [2024-11-26 19:20:18.508720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.463 [2024-11-26 19:20:18.508727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.463 [2024-11-26 19:20:18.508733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.463 [2024-11-26 19:20:18.508746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.463 qpair failed and we were unable to recover it.
00:30:01.463 [2024-11-26 19:20:18.518668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.463 [2024-11-26 19:20:18.518717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.463 [2024-11-26 19:20:18.518731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.463 [2024-11-26 19:20:18.518738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.463 [2024-11-26 19:20:18.518744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.463 [2024-11-26 19:20:18.518757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.463 qpair failed and we were unable to recover it.
00:30:01.463 [2024-11-26 19:20:18.528694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.463 [2024-11-26 19:20:18.528742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.463 [2024-11-26 19:20:18.528756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.463 [2024-11-26 19:20:18.528770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.463 [2024-11-26 19:20:18.528776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.463 [2024-11-26 19:20:18.528790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.463 qpair failed and we were unable to recover it.
00:30:01.463 [2024-11-26 19:20:18.538738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.463 [2024-11-26 19:20:18.538789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.463 [2024-11-26 19:20:18.538802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.463 [2024-11-26 19:20:18.538809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.463 [2024-11-26 19:20:18.538816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.463 [2024-11-26 19:20:18.538830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.463 qpair failed and we were unable to recover it.
00:30:01.463 [2024-11-26 19:20:18.548739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.463 [2024-11-26 19:20:18.548786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.463 [2024-11-26 19:20:18.548799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.463 [2024-11-26 19:20:18.548806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.463 [2024-11-26 19:20:18.548813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.463 [2024-11-26 19:20:18.548826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.463 qpair failed and we were unable to recover it.
00:30:01.463 [2024-11-26 19:20:18.558744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.463 [2024-11-26 19:20:18.558814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.463 [2024-11-26 19:20:18.558827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.463 [2024-11-26 19:20:18.558834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.463 [2024-11-26 19:20:18.558841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.463 [2024-11-26 19:20:18.558854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.463 qpair failed and we were unable to recover it.
00:30:01.463 [2024-11-26 19:20:18.568801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.463 [2024-11-26 19:20:18.568886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.463 [2024-11-26 19:20:18.568900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.463 [2024-11-26 19:20:18.568907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.463 [2024-11-26 19:20:18.568913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.463 [2024-11-26 19:20:18.568927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.463 qpair failed and we were unable to recover it.
00:30:01.463 [2024-11-26 19:20:18.578824] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.463 [2024-11-26 19:20:18.578873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.463 [2024-11-26 19:20:18.578898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.463 [2024-11-26 19:20:18.578907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.463 [2024-11-26 19:20:18.578914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.463 [2024-11-26 19:20:18.578933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.463 qpair failed and we were unable to recover it.
00:30:01.463 [2024-11-26 19:20:18.588886] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.463 [2024-11-26 19:20:18.588940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.463 [2024-11-26 19:20:18.588965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.463 [2024-11-26 19:20:18.588973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.463 [2024-11-26 19:20:18.588981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.463 [2024-11-26 19:20:18.589000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.463 qpair failed and we were unable to recover it.
00:30:01.463 [2024-11-26 19:20:18.598872] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.463 [2024-11-26 19:20:18.598936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.463 [2024-11-26 19:20:18.598962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.463 [2024-11-26 19:20:18.598971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.463 [2024-11-26 19:20:18.598978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.463 [2024-11-26 19:20:18.598997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.463 qpair failed and we were unable to recover it.
00:30:01.463 [2024-11-26 19:20:18.608910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.463 [2024-11-26 19:20:18.608955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.464 [2024-11-26 19:20:18.608971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.464 [2024-11-26 19:20:18.608978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.464 [2024-11-26 19:20:18.608985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.464 [2024-11-26 19:20:18.609000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.464 qpair failed and we were unable to recover it.
00:30:01.464 [2024-11-26 19:20:18.618940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.464 [2024-11-26 19:20:18.618997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.464 [2024-11-26 19:20:18.619022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.464 [2024-11-26 19:20:18.619031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.464 [2024-11-26 19:20:18.619038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.464 [2024-11-26 19:20:18.619058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.464 qpair failed and we were unable to recover it.
00:30:01.464 [2024-11-26 19:20:18.628971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.464 [2024-11-26 19:20:18.629022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.464 [2024-11-26 19:20:18.629038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.464 [2024-11-26 19:20:18.629045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.464 [2024-11-26 19:20:18.629052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.464 [2024-11-26 19:20:18.629067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.464 qpair failed and we were unable to recover it.
00:30:01.464 [2024-11-26 19:20:18.639006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.464 [2024-11-26 19:20:18.639050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.464 [2024-11-26 19:20:18.639064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.464 [2024-11-26 19:20:18.639071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.464 [2024-11-26 19:20:18.639077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.464 [2024-11-26 19:20:18.639092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.464 qpair failed and we were unable to recover it.
00:30:01.464 [2024-11-26 19:20:18.649054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.464 [2024-11-26 19:20:18.649100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.464 [2024-11-26 19:20:18.649113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.464 [2024-11-26 19:20:18.649121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.464 [2024-11-26 19:20:18.649127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.464 [2024-11-26 19:20:18.649141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.464 qpair failed and we were unable to recover it.
00:30:01.464 [2024-11-26 19:20:18.659060] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.464 [2024-11-26 19:20:18.659109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.464 [2024-11-26 19:20:18.659123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.464 [2024-11-26 19:20:18.659135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.464 [2024-11-26 19:20:18.659141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.464 [2024-11-26 19:20:18.659155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.464 qpair failed and we were unable to recover it.
00:30:01.464 [2024-11-26 19:20:18.669088] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.464 [2024-11-26 19:20:18.669136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.464 [2024-11-26 19:20:18.669150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.464 [2024-11-26 19:20:18.669157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.464 [2024-11-26 19:20:18.669168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.464 [2024-11-26 19:20:18.669181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.464 qpair failed and we were unable to recover it.
00:30:01.727 [2024-11-26 19:20:18.679246] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.727 [2024-11-26 19:20:18.679293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.727 [2024-11-26 19:20:18.679306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.727 [2024-11-26 19:20:18.679314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.727 [2024-11-26 19:20:18.679320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.727 [2024-11-26 19:20:18.679334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.727 qpair failed and we were unable to recover it.
00:30:01.727 [2024-11-26 19:20:18.689129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.727 [2024-11-26 19:20:18.689190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.727 [2024-11-26 19:20:18.689203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.727 [2024-11-26 19:20:18.689210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.727 [2024-11-26 19:20:18.689217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.727 [2024-11-26 19:20:18.689231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.727 qpair failed and we were unable to recover it.
00:30:01.727 [2024-11-26 19:20:18.699170] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.727 [2024-11-26 19:20:18.699217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.727 [2024-11-26 19:20:18.699230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.727 [2024-11-26 19:20:18.699237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.727 [2024-11-26 19:20:18.699244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.727 [2024-11-26 19:20:18.699257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.727 qpair failed and we were unable to recover it.
00:30:01.727 [2024-11-26 19:20:18.709178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.727 [2024-11-26 19:20:18.709227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.727 [2024-11-26 19:20:18.709240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.727 [2024-11-26 19:20:18.709247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.727 [2024-11-26 19:20:18.709254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.727 [2024-11-26 19:20:18.709267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.727 qpair failed and we were unable to recover it.
00:30:01.727 [2024-11-26 19:20:18.719222] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.727 [2024-11-26 19:20:18.719266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.727 [2024-11-26 19:20:18.719279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.727 [2024-11-26 19:20:18.719286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.727 [2024-11-26 19:20:18.719292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:01.727 [2024-11-26 19:20:18.719306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.727 qpair failed and we were unable to recover it. 00:30:01.727 [2024-11-26 19:20:18.729227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.727 [2024-11-26 19:20:18.729271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.727 [2024-11-26 19:20:18.729285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.727 [2024-11-26 19:20:18.729292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.727 [2024-11-26 19:20:18.729298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:01.727 [2024-11-26 19:20:18.729312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.727 qpair failed and we were unable to recover it. 00:30:01.727 [2024-11-26 19:20:18.739257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.727 [2024-11-26 19:20:18.739302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.727 [2024-11-26 19:20:18.739315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.727 [2024-11-26 19:20:18.739323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.727 [2024-11-26 19:20:18.739329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:01.727 [2024-11-26 19:20:18.739343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.727 qpair failed and we were unable to recover it. 
00:30:01.727 [2024-11-26 19:20:18.749264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.727 [2024-11-26 19:20:18.749331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.727 [2024-11-26 19:20:18.749344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.727 [2024-11-26 19:20:18.749351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.727 [2024-11-26 19:20:18.749358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:01.727 [2024-11-26 19:20:18.749371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.727 qpair failed and we were unable to recover it. 00:30:01.728 [2024-11-26 19:20:18.759322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.728 [2024-11-26 19:20:18.759367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.728 [2024-11-26 19:20:18.759381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.728 [2024-11-26 19:20:18.759388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.728 [2024-11-26 19:20:18.759394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:01.728 [2024-11-26 19:20:18.759408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.728 qpair failed and we were unable to recover it. 00:30:01.728 [2024-11-26 19:20:18.769242] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.728 [2024-11-26 19:20:18.769286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.728 [2024-11-26 19:20:18.769299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.728 [2024-11-26 19:20:18.769306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.728 [2024-11-26 19:20:18.769312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:01.728 [2024-11-26 19:20:18.769326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.728 qpair failed and we were unable to recover it. 
00:30:01.728 [2024-11-26 19:20:18.779353] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.728 [2024-11-26 19:20:18.779448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.728 [2024-11-26 19:20:18.779461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.728 [2024-11-26 19:20:18.779468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.728 [2024-11-26 19:20:18.779475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:01.728 [2024-11-26 19:20:18.779488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.728 qpair failed and we were unable to recover it. 00:30:01.728 [2024-11-26 19:20:18.789414] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.728 [2024-11-26 19:20:18.789460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.728 [2024-11-26 19:20:18.789473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.728 [2024-11-26 19:20:18.789484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.728 [2024-11-26 19:20:18.789490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:01.728 [2024-11-26 19:20:18.789504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.728 qpair failed and we were unable to recover it. 00:30:01.728 [2024-11-26 19:20:18.799420] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.728 [2024-11-26 19:20:18.799487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.728 [2024-11-26 19:20:18.799500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.728 [2024-11-26 19:20:18.799507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.728 [2024-11-26 19:20:18.799514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0 00:30:01.728 [2024-11-26 19:20:18.799527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.728 qpair failed and we were unable to recover it. 
00:30:01.728 [2024-11-26 19:20:18.809448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.728 [2024-11-26 19:20:18.809490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.728 [2024-11-26 19:20:18.809503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.728 [2024-11-26 19:20:18.809510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.728 [2024-11-26 19:20:18.809517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.728 [2024-11-26 19:20:18.809530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.728 qpair failed and we were unable to recover it.
00:30:01.728 [2024-11-26 19:20:18.819490] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.728 [2024-11-26 19:20:18.819537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.728 [2024-11-26 19:20:18.819550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.728 [2024-11-26 19:20:18.819557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.728 [2024-11-26 19:20:18.819563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.728 [2024-11-26 19:20:18.819576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.728 qpair failed and we were unable to recover it.
00:30:01.728 [2024-11-26 19:20:18.829521] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.728 [2024-11-26 19:20:18.829573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.728 [2024-11-26 19:20:18.829586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.728 [2024-11-26 19:20:18.829593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.728 [2024-11-26 19:20:18.829600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.728 [2024-11-26 19:20:18.829616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.728 qpair failed and we were unable to recover it.
00:30:01.728 [2024-11-26 19:20:18.839529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.728 [2024-11-26 19:20:18.839574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.728 [2024-11-26 19:20:18.839587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.728 [2024-11-26 19:20:18.839595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.728 [2024-11-26 19:20:18.839601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.728 [2024-11-26 19:20:18.839614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.728 qpair failed and we were unable to recover it.
00:30:01.728 [2024-11-26 19:20:18.849429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.728 [2024-11-26 19:20:18.849471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.728 [2024-11-26 19:20:18.849485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.728 [2024-11-26 19:20:18.849491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.728 [2024-11-26 19:20:18.849498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.728 [2024-11-26 19:20:18.849511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.728 qpair failed and we were unable to recover it.
00:30:01.728 [2024-11-26 19:20:18.859589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.728 [2024-11-26 19:20:18.859633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.728 [2024-11-26 19:20:18.859646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.728 [2024-11-26 19:20:18.859653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.728 [2024-11-26 19:20:18.859660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.728 [2024-11-26 19:20:18.859673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.728 qpair failed and we were unable to recover it.
00:30:01.728 [2024-11-26 19:20:18.869607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.728 [2024-11-26 19:20:18.869657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.728 [2024-11-26 19:20:18.869670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.728 [2024-11-26 19:20:18.869677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.728 [2024-11-26 19:20:18.869684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.728 [2024-11-26 19:20:18.869697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.728 qpair failed and we were unable to recover it.
00:30:01.728 [2024-11-26 19:20:18.879609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.728 [2024-11-26 19:20:18.879652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.728 [2024-11-26 19:20:18.879665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.729 [2024-11-26 19:20:18.879672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.729 [2024-11-26 19:20:18.879679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.729 [2024-11-26 19:20:18.879692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.729 qpair failed and we were unable to recover it.
00:30:01.729 [2024-11-26 19:20:18.889652] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.729 [2024-11-26 19:20:18.889696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.729 [2024-11-26 19:20:18.889709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.729 [2024-11-26 19:20:18.889716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.729 [2024-11-26 19:20:18.889722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.729 [2024-11-26 19:20:18.889736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.729 qpair failed and we were unable to recover it.
00:30:01.729 [2024-11-26 19:20:18.899685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.729 [2024-11-26 19:20:18.899731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.729 [2024-11-26 19:20:18.899744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.729 [2024-11-26 19:20:18.899751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.729 [2024-11-26 19:20:18.899758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.729 [2024-11-26 19:20:18.899771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.729 qpair failed and we were unable to recover it.
00:30:01.729 [2024-11-26 19:20:18.909732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.729 [2024-11-26 19:20:18.909786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.729 [2024-11-26 19:20:18.909800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.729 [2024-11-26 19:20:18.909807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.729 [2024-11-26 19:20:18.909813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.729 [2024-11-26 19:20:18.909827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.729 qpair failed and we were unable to recover it.
00:30:01.729 [2024-11-26 19:20:18.919738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.729 [2024-11-26 19:20:18.919778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.729 [2024-11-26 19:20:18.919792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.729 [2024-11-26 19:20:18.919802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.729 [2024-11-26 19:20:18.919808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.729 [2024-11-26 19:20:18.919822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.729 qpair failed and we were unable to recover it.
00:30:01.729 [2024-11-26 19:20:18.929768] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.729 [2024-11-26 19:20:18.929813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.729 [2024-11-26 19:20:18.929826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.729 [2024-11-26 19:20:18.929833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.729 [2024-11-26 19:20:18.929840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.729 [2024-11-26 19:20:18.929853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.729 qpair failed and we were unable to recover it.
00:30:01.991 [2024-11-26 19:20:18.939668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.991 [2024-11-26 19:20:18.939723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.991 [2024-11-26 19:20:18.939736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.991 [2024-11-26 19:20:18.939743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.991 [2024-11-26 19:20:18.939750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.991 [2024-11-26 19:20:18.939763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.991 qpair failed and we were unable to recover it.
00:30:01.991 [2024-11-26 19:20:18.949825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.991 [2024-11-26 19:20:18.949880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.991 [2024-11-26 19:20:18.949895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.991 [2024-11-26 19:20:18.949903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.991 [2024-11-26 19:20:18.949910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.991 [2024-11-26 19:20:18.949927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.991 qpair failed and we were unable to recover it.
00:30:01.991 [2024-11-26 19:20:18.959839] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.991 [2024-11-26 19:20:18.959882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.991 [2024-11-26 19:20:18.959896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.991 [2024-11-26 19:20:18.959903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.991 [2024-11-26 19:20:18.959910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.991 [2024-11-26 19:20:18.959927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.991 qpair failed and we were unable to recover it.
00:30:01.991 [2024-11-26 19:20:18.969735] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.991 [2024-11-26 19:20:18.969829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.991 [2024-11-26 19:20:18.969843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.991 [2024-11-26 19:20:18.969850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.991 [2024-11-26 19:20:18.969856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.991 [2024-11-26 19:20:18.969870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.991 qpair failed and we were unable to recover it.
00:30:01.991 [2024-11-26 19:20:18.979898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.991 [2024-11-26 19:20:18.979947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.991 [2024-11-26 19:20:18.979960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.991 [2024-11-26 19:20:18.979967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.991 [2024-11-26 19:20:18.979973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.991 [2024-11-26 19:20:18.979987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.991 qpair failed and we were unable to recover it.
00:30:01.991 [2024-11-26 19:20:18.989912] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.991 [2024-11-26 19:20:18.989959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.991 [2024-11-26 19:20:18.989972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.991 [2024-11-26 19:20:18.989979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.991 [2024-11-26 19:20:18.989986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.991 [2024-11-26 19:20:18.989999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.991 qpair failed and we were unable to recover it.
00:30:01.991 [2024-11-26 19:20:18.999955] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.991 [2024-11-26 19:20:18.999999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.991 [2024-11-26 19:20:19.000012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.992 [2024-11-26 19:20:19.000019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.992 [2024-11-26 19:20:19.000026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.992 [2024-11-26 19:20:19.000039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.992 qpair failed and we were unable to recover it.
00:30:01.992 [2024-11-26 19:20:19.009984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.992 [2024-11-26 19:20:19.010028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.992 [2024-11-26 19:20:19.010042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.992 [2024-11-26 19:20:19.010049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.992 [2024-11-26 19:20:19.010055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.992 [2024-11-26 19:20:19.010069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.992 qpair failed and we were unable to recover it.
00:30:01.992 [2024-11-26 19:20:19.019875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.992 [2024-11-26 19:20:19.019921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.992 [2024-11-26 19:20:19.019934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.992 [2024-11-26 19:20:19.019941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.992 [2024-11-26 19:20:19.019948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.992 [2024-11-26 19:20:19.019961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.992 qpair failed and we were unable to recover it.
00:30:01.992 [2024-11-26 19:20:19.030038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.992 [2024-11-26 19:20:19.030088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.992 [2024-11-26 19:20:19.030102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.992 [2024-11-26 19:20:19.030110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.992 [2024-11-26 19:20:19.030117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.992 [2024-11-26 19:20:19.030132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.992 qpair failed and we were unable to recover it.
00:30:01.992 [2024-11-26 19:20:19.040052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.992 [2024-11-26 19:20:19.040098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.992 [2024-11-26 19:20:19.040111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.992 [2024-11-26 19:20:19.040119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.992 [2024-11-26 19:20:19.040125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.992 [2024-11-26 19:20:19.040138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.992 qpair failed and we were unable to recover it.
00:30:01.992 [2024-11-26 19:20:19.050084] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.992 [2024-11-26 19:20:19.050178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.992 [2024-11-26 19:20:19.050191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.992 [2024-11-26 19:20:19.050202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.992 [2024-11-26 19:20:19.050208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.992 [2024-11-26 19:20:19.050222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.992 qpair failed and we were unable to recover it.
00:30:01.992 [2024-11-26 19:20:19.060067] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.992 [2024-11-26 19:20:19.060120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.992 [2024-11-26 19:20:19.060133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.992 [2024-11-26 19:20:19.060140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.992 [2024-11-26 19:20:19.060146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.992 [2024-11-26 19:20:19.060163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.992 qpair failed and we were unable to recover it.
00:30:01.992 [2024-11-26 19:20:19.070111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.992 [2024-11-26 19:20:19.070162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.992 [2024-11-26 19:20:19.070176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.992 [2024-11-26 19:20:19.070183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.992 [2024-11-26 19:20:19.070189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.992 [2024-11-26 19:20:19.070202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.992 qpair failed and we were unable to recover it.
00:30:01.992 [2024-11-26 19:20:19.080125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.992 [2024-11-26 19:20:19.080173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.992 [2024-11-26 19:20:19.080186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.992 [2024-11-26 19:20:19.080193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.992 [2024-11-26 19:20:19.080199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13780c0
00:30:01.992 [2024-11-26 19:20:19.080213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:01.992 qpair failed and we were unable to recover it.
00:30:01.992 [2024-11-26 19:20:19.090180] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.992 [2024-11-26 19:20:19.090270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.992 [2024-11-26 19:20:19.090335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.992 [2024-11-26 19:20:19.090361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.992 [2024-11-26 19:20:19.090382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd608000b90
00:30:01.992 [2024-11-26 19:20:19.090451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:01.992 qpair failed and we were unable to recover it.
00:30:01.992 [2024-11-26 19:20:19.100203] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.992 [2024-11-26 19:20:19.100269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.992 [2024-11-26 19:20:19.100299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.992 [2024-11-26 19:20:19.100314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.992 [2024-11-26 19:20:19.100329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd608000b90
00:30:01.992 [2024-11-26 19:20:19.100363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:01.992 qpair failed and we were unable to recover it.
00:30:01.992 [2024-11-26 19:20:19.100784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136de10 is same with the state(6) to be set
00:30:01.992 Read completed with error (sct=0, sc=8)
00:30:01.992 starting I/O failed
00:30:01.992 Read completed with error (sct=0, sc=8)
00:30:01.992 starting I/O failed
00:30:01.992 Read completed with error (sct=0, sc=8)
00:30:01.992 starting I/O failed
00:30:01.992 Read completed with error (sct=0, sc=8)
00:30:01.992 starting I/O failed
00:30:01.992 Read completed with error (sct=0, sc=8)
00:30:01.992 starting I/O failed
00:30:01.992 Read completed with error (sct=0, sc=8)
00:30:01.992 starting I/O failed
00:30:01.992 Read completed with error (sct=0, sc=8)
00:30:01.992 starting I/O failed
00:30:01.992 Read completed with error (sct=0, sc=8)
00:30:01.992 starting I/O failed
00:30:01.992 Read completed with error (sct=0, sc=8)
00:30:01.992 starting I/O failed
00:30:01.992 Read completed with error (sct=0, sc=8)
00:30:01.992 starting I/O failed
00:30:01.992 Read completed with error (sct=0, sc=8)
00:30:01.992 starting I/O failed
00:30:01.992 Read completed with error (sct=0, sc=8)
00:30:01.992 starting I/O failed
00:30:01.992 Write completed with error (sct=0, sc=8)
00:30:01.992 starting I/O failed
00:30:01.992 Read completed with error (sct=0, sc=8)
00:30:01.992 starting I/O failed
00:30:01.992 Read completed with error (sct=0, sc=8)
00:30:01.992 starting I/O failed
00:30:01.992 Write completed with error (sct=0, sc=8)
00:30:01.992 starting I/O failed
00:30:01.992 Read completed with error (sct=0, sc=8)
00:30:01.992 starting I/O failed
00:30:01.992 Read completed with error (sct=0, sc=8)
00:30:01.992 starting I/O failed
00:30:01.992 Read completed with error (sct=0, sc=8)
00:30:01.992 starting I/O failed
00:30:01.992 Read completed with error (sct=0, sc=8)
00:30:01.992 starting I/O failed
00:30:01.992 Write completed with error (sct=0, sc=8)
00:30:01.992 starting I/O failed
00:30:01.992 Write completed with error (sct=0, sc=8)
00:30:01.992 starting I/O failed
00:30:01.992 Read completed with error (sct=0, sc=8)
00:30:01.992 starting I/O failed
00:30:01.992 Read completed with error (sct=0, sc=8)
00:30:01.992 starting I/O failed
00:30:01.992 Write completed with error (sct=0, sc=8)
00:30:01.992 starting I/O failed
00:30:01.992 Read completed with error (sct=0, sc=8)
00:30:01.992 starting I/O failed
00:30:01.992 Write completed with error (sct=0, sc=8)
00:30:01.992 starting I/O failed
00:30:01.992 Read completed with error (sct=0, sc=8)
00:30:01.992 starting I/O failed
00:30:01.992 Read completed with error (sct=0, sc=8)
00:30:01.992 starting I/O failed
00:30:01.992 Read completed with error (sct=0, sc=8)
00:30:01.992 starting I/O failed
00:30:01.992 Read completed with error (sct=0, sc=8)
00:30:01.992 starting I/O failed
00:30:01.992 Read completed with error (sct=0, sc=8)
00:30:01.992 starting I/O failed
00:30:01.992 [2024-11-26 19:20:19.101745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.993 [2024-11-26 19:20:19.110245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.993 [2024-11-26 19:20:19.110358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.993 [2024-11-26 19:20:19.110406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.993 [2024-11-26 19:20:19.110429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.993 [2024-11-26 19:20:19.110451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd600000b90
00:30:01.993 [2024-11-26 19:20:19.110509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.993 qpair failed and we were unable to recover it.
00:30:01.993 [2024-11-26 19:20:19.120258] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.993 [2024-11-26 19:20:19.120327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.993 [2024-11-26 19:20:19.120355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.993 [2024-11-26 19:20:19.120371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.993 [2024-11-26 19:20:19.120385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd600000b90
00:30:01.993 [2024-11-26 19:20:19.120417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.993 qpair failed and we were unable to recover it.
00:30:01.993 Read completed with error (sct=0, sc=8)
00:30:01.993 starting I/O failed
00:30:01.993 Read completed with error (sct=0, sc=8)
00:30:01.993 starting I/O failed
00:30:01.993 Read completed with error (sct=0, sc=8)
00:30:01.993 starting I/O failed
00:30:01.993 Read completed with error (sct=0, sc=8)
00:30:01.993 starting I/O failed
00:30:01.993 Read completed with error (sct=0, sc=8)
00:30:01.993 starting I/O failed
00:30:01.993 Write completed with error (sct=0, sc=8)
00:30:01.993 starting I/O failed
00:30:01.993 Write completed with error (sct=0, sc=8)
00:30:01.993 starting I/O failed
00:30:01.993 Read completed with error (sct=0, sc=8)
00:30:01.993 starting I/O failed
00:30:01.993 Write completed with error (sct=0, sc=8)
00:30:01.993 starting I/O failed
00:30:01.993 Read completed with error (sct=0, sc=8)
00:30:01.993 starting I/O failed
00:30:01.993 Read completed with error (sct=0, sc=8)
00:30:01.993 starting I/O failed
00:30:01.993 Read completed with error (sct=0, sc=8)
00:30:01.993 starting I/O failed
00:30:01.993 Read completed with error (sct=0, sc=8)
00:30:01.993 starting I/O failed
00:30:01.993 Write completed with error (sct=0, sc=8)
00:30:01.993 starting I/O failed
00:30:01.993 Write completed with error (sct=0, sc=8)
00:30:01.993 starting I/O failed
00:30:01.993 Read completed with error (sct=0, sc=8)
00:30:01.993 starting I/O failed
00:30:01.993 Write completed with error (sct=0, sc=8)
00:30:01.993 starting I/O failed
00:30:01.993 Read completed with error (sct=0, sc=8)
00:30:01.993 starting I/O failed
00:30:01.993 Write completed with error (sct=0, sc=8)
00:30:01.993 starting I/O failed
00:30:01.993 Write completed with error (sct=0, sc=8)
00:30:01.993 starting I/O failed
00:30:01.993 Read completed with error (sct=0, sc=8)
00:30:01.993 starting I/O failed
00:30:01.993 Write completed with error (sct=0, sc=8)
00:30:01.993 starting I/O failed
00:30:01.993 Read completed with error (sct=0, sc=8)
00:30:01.993 starting I/O failed
00:30:01.993 Write completed with error (sct=0, sc=8)
00:30:01.993 starting I/O failed
00:30:01.993 Write completed with error (sct=0, sc=8)
00:30:01.993 starting I/O failed
00:30:01.993 Write completed with error (sct=0, sc=8)
00:30:01.993 starting I/O failed
00:30:01.993 Read completed with error (sct=0, sc=8)
00:30:01.993 starting I/O failed
00:30:01.993 Write completed with error (sct=0, sc=8)
00:30:01.993 starting I/O failed
00:30:01.993 Write completed with error (sct=0, sc=8)
00:30:01.993 starting I/O failed
00:30:01.993 Write completed with error (sct=0, sc=8)
00:30:01.993 starting I/O failed
00:30:01.993 Write completed with error (sct=0, sc=8)
00:30:01.993 starting I/O failed
00:30:01.993 Read completed with error (sct=0, sc=8)
00:30:01.993 starting I/O failed
00:30:01.993 [2024-11-26 19:20:19.121355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.993 [2024-11-26 19:20:19.130299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.993 [2024-11-26 19:20:19.130380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.993 [2024-11-26 19:20:19.130428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.993 [2024-11-26 19:20:19.130451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.993 [2024-11-26 19:20:19.130471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd5fc000b90
00:30:01.993 [2024-11-26 19:20:19.130530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.993 qpair failed and we were unable to recover it.
00:30:01.993 [2024-11-26 19:20:19.140303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.993 [2024-11-26 19:20:19.140368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.993 [2024-11-26 19:20:19.140396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.993 [2024-11-26 19:20:19.140411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.993 [2024-11-26 19:20:19.140426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd5fc000b90
00:30:01.993 [2024-11-26 19:20:19.140458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:01.993 qpair failed and we were unable to recover it.
00:30:01.993 [2024-11-26 19:20:19.140937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x136de10 (9): Bad file descriptor
00:30:01.993 Initializing NVMe Controllers
00:30:01.993 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:01.993 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:01.993 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:30:01.993 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:30:01.993 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:30:01.993 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:30:01.993 Initialization complete. Launching workers.
00:30:01.993 Starting thread on core 1
00:30:01.993 Starting thread on core 2
00:30:01.993 Starting thread on core 3
00:30:01.993 Starting thread on core 0
00:30:01.993 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:30:01.993
00:30:01.993 real 0m11.379s
00:30:01.993 user 0m21.947s
00:30:01.993 sys 0m4.023s
00:30:01.993 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:01.993 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:01.993 ************************************
00:30:01.993 END TEST nvmf_target_disconnect_tc2
00:30:01.993 ************************************
00:30:01.993 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:30:01.993 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:30:01.993 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:30:01.993 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:01.993 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:30:02.253 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:02.253 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:30:02.253 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:02.253 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:02.253 rmmod nvme_tcp
00:30:02.253 rmmod nvme_fabrics
00:30:02.253 rmmod nvme_keyring
00:30:02.253 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:02.253 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:30:02.253 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:30:02.253 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3133597 ']'
00:30:02.253 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3133597
00:30:02.253 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3133597 ']'
00:30:02.253 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3133597
00:30:02.253 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname
00:30:02.253 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:02.253 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3133597
00:30:02.253 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4
00:30:02.253 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']'
00:30:02.253 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3133597'
00:30:02.253 killing process with pid 3133597
00:30:02.253 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 3133597
00:30:02.253 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3133597
00:30:02.253 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:30:02.253 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:30:02.253 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:30:02.253 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr
00:30:02.253 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save
00:30:02.253 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:30:02.253 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore
00:30:02.513 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:30:02.513 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:30:02.513 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:02.513 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:02.513 19:20:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:04.425 19:20:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:30:04.425
00:30:04.425 real 0m21.917s
00:30:04.425 user 0m49.574s
00:30:04.425 sys 0m10.348s
00:30:04.425 19:20:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:04.425 19:20:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:30:04.425 ************************************
00:30:04.425 END TEST nvmf_target_disconnect
00:30:04.425 ************************************
00:30:04.425 19:20:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:30:04.425
00:30:04.425 real 6m34.412s
00:30:04.425 user 11m23.976s
00:30:04.425 sys 2m17.022s
00:30:04.425 19:20:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:04.425 19:20:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:30:04.425 ************************************
00:30:04.425 END TEST nvmf_host
00:30:04.425 ************************************
00:30:04.425 19:20:21 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]]
00:30:04.425 19:20:21 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]]
00:30:04.425 19:20:21 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:30:04.425 19:20:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:30:04.425 19:20:21 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:04.425 19:20:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:30:04.686 ************************************
00:30:04.686 START TEST nvmf_target_core_interrupt_mode
00:30:04.686 ************************************
00:30:04.686 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:30:04.686 * Looking for test storage...
00:30:04.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:30:04.686 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:30:04.686 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version
00:30:04.686 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:30:04.686 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:30:04.686 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:30:04.686 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l
00:30:04.686 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l
00:30:04.686 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-:
00:30:04.686 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1
00:30:04.686 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-:
00:30:04.686 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2
00:30:04.686 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<'
00:30:04.686 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2
00:30:04.686 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1
00:30:04.686 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:30:04.686 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in
00:30:04.686 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1
00:30:04.686 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 ))
00:30:04.686 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:30:04.686 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1
00:30:04.686 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1
00:30:04.686 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:30:04.686 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1
00:30:04.686 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1
00:30:04.686 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2
00:30:04.686 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2
00:30:04.686 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:30:04.686 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2
00:30:04.686 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2
00:30:04.686 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:30:04.686 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:30:04.686 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0
00:30:04.686 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:30:04.686 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:30:04.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:04.686 --rc genhtml_branch_coverage=1
00:30:04.686 --rc genhtml_function_coverage=1
00:30:04.686 --rc genhtml_legend=1
00:30:04.686 --rc geninfo_all_blocks=1
00:30:04.686 --rc geninfo_unexecuted_blocks=1
00:30:04.686
00:30:04.686 '
00:30:04.687 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:30:04.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:04.687 --rc genhtml_branch_coverage=1
00:30:04.687 --rc genhtml_function_coverage=1
00:30:04.687 --rc genhtml_legend=1
00:30:04.687 --rc geninfo_all_blocks=1
00:30:04.687 --rc geninfo_unexecuted_blocks=1
00:30:04.687
00:30:04.687 '
00:30:04.687 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:30:04.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:04.687 --rc genhtml_branch_coverage=1
00:30:04.687 --rc genhtml_function_coverage=1
00:30:04.687 --rc genhtml_legend=1
00:30:04.687 --rc geninfo_all_blocks=1
00:30:04.687 --rc geninfo_unexecuted_blocks=1
00:30:04.687
00:30:04.687 '
00:30:04.687 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:30:04.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:04.687 --rc genhtml_branch_coverage=1
00:30:04.687 --rc genhtml_function_coverage=1
00:30:04.687 --rc genhtml_legend=1
00:30:04.687 --rc geninfo_all_blocks=1
00:30:04.687 --rc geninfo_unexecuted_blocks=1
00:30:04.687
00:30:04.687 '
00:30:04.687 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s
00:30:04.687 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!'
Linux = Linux ']' 00:30:04.687 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:04.687 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:30:04.687 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:04.687 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:04.948 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:04.948 ************************************ 00:30:04.948 START TEST nvmf_abort 00:30:04.948 ************************************ 00:30:04.949 19:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:04.949 * Looking for test storage... 00:30:04.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:04.949 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:04.949 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:30:04.949 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:04.949 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:04.949 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:04.949 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:04.949 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:04.949 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:30:04.949 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:30:04.949 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:30:04.949 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:30:04.949 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:30:04.949 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:30:04.949 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:30:04.949 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:04.949 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:30:04.949 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:30:04.949 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:04.949 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:04.949 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:30:04.949 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:30:04.949 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:04.949 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:05.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:05.210 --rc genhtml_branch_coverage=1 00:30:05.210 --rc genhtml_function_coverage=1 00:30:05.210 --rc genhtml_legend=1 00:30:05.210 --rc geninfo_all_blocks=1 00:30:05.210 --rc geninfo_unexecuted_blocks=1 00:30:05.210 00:30:05.210 ' 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:05.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:05.210 --rc genhtml_branch_coverage=1 00:30:05.210 --rc genhtml_function_coverage=1 00:30:05.210 --rc genhtml_legend=1 00:30:05.210 --rc geninfo_all_blocks=1 00:30:05.210 --rc geninfo_unexecuted_blocks=1 00:30:05.210 00:30:05.210 ' 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:05.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:05.210 --rc genhtml_branch_coverage=1 00:30:05.210 --rc genhtml_function_coverage=1 00:30:05.210 --rc genhtml_legend=1 00:30:05.210 --rc geninfo_all_blocks=1 00:30:05.210 --rc geninfo_unexecuted_blocks=1 00:30:05.210 00:30:05.210 ' 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:05.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:05.210 --rc genhtml_branch_coverage=1 00:30:05.210 --rc genhtml_function_coverage=1 00:30:05.210 --rc genhtml_legend=1 00:30:05.210 --rc geninfo_all_blocks=1 00:30:05.210 --rc geninfo_unexecuted_blocks=1 00:30:05.210 00:30:05.210 ' 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:05.210 19:20:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:05.210 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:05.211 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:05.211 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:05.211 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:05.211 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:05.211 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:05.211 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:30:05.211 19:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:13.347 19:20:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:13.347 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
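
The device scan traced here is nvmf/common.sh classifying the host's NICs for the phy test: it loads known device-ID lists for Intel E810 (0x1592, 0x159b) and X722 (0x37d2) plus several Mellanox parts, and because this job runs with SPDK_TEST_NVMF_NICS=e810 it keeps only the E810 functions, finding both ports of the test NIC (0000:4b:00.0 and 0000:4b:00.1, vendor 0x8086, device 0x159b). A minimal standalone sketch of the same vendor/device matching done directly against sysfs — illustrative only, not the harness's exact code:

    #!/usr/bin/env bash
    # Sketch: locate E810 ports by PCI vendor/device ID, mirroring the trace above.
    e810=(0x1592 0x159b)                      # E810 device IDs used by the harness
    for dev in /sys/bus/pci/devices/*; do
      vendor=$(<"$dev/vendor")                # e.g. 0x8086 (Intel)
      device=$(<"$dev/device")                # e.g. 0x159b
      [[ $vendor == 0x8086 ]] || continue
      for id in "${e810[@]}"; do
        [[ $device == "$id" ]] && echo "Found ${dev##*/} ($vendor - $device)"
      done
    done

The trace continues below by resolving each kept PCI function to its net device (cvl_0_0, cvl_0_1) via /sys/bus/pci/devices/$pci/net/.
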
00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:13.347 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:13.347 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:13.347 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:13.348 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:13.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:13.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.529 ms 00:30:13.348 00:30:13.348 --- 10.0.0.2 ping statistics --- 00:30:13.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.348 rtt min/avg/max/mdev = 0.529/0.529/0.529/0.000 ms 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:13.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:13.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:30:13.348 00:30:13.348 --- 10.0.0.1 ping statistics --- 00:30:13.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.348 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=3139228 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3139228 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3139228 ']' 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:13.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:13.348 19:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:13.348 [2024-11-26 19:20:29.779121] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:13.348 [2024-11-26 19:20:29.780254] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:30:13.348 [2024-11-26 19:20:29.780306] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:13.348 [2024-11-26 19:20:29.881703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:13.348 [2024-11-26 19:20:29.933702] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:13.348 [2024-11-26 19:20:29.933747] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:13.348 [2024-11-26 19:20:29.933756] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:13.348 [2024-11-26 19:20:29.933763] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:13.348 [2024-11-26 19:20:29.933769] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:13.348 [2024-11-26 19:20:29.935801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:13.348 [2024-11-26 19:20:29.935963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:13.348 [2024-11-26 19:20:29.935964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:13.348 [2024-11-26 19:20:30.015224] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:13.348 [2024-11-26 19:20:30.016175] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:13.348 [2024-11-26 19:20:30.016747] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
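
Everything from ip netns add onward in the trace above is nvmf_tcp_init wiring the two physical ports into a loopback-free topology before the target starts: cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk namespace as the target side (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1/24), an iptables ACCEPT rule opens TCP port 4420, both directions are ping-verified, and nvmf_tgt is launched inside the namespace in interrupt mode. Condensed into one runnable sequence (interface, namespace, and flag values copied from the trace; run as root from an SPDK build tree):

    # Target NIC goes into its own namespace; initiator NIC stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # Launch the target inside the namespace in interrupt mode, as traced above;
    # the harness then polls for the app (waitforlisten) before issuing RPCs.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
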
00:30:13.348 [2024-11-26 19:20:30.016857] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:13.608 19:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:13.608 19:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:30:13.608 19:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:13.608 19:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:13.608 19:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:13.608 19:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:13.608 19:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:30:13.608 19:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.608 19:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:13.608 [2024-11-26 19:20:30.652862] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:13.608 19:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.608 19:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:30:13.608 19:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.608 19:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:13.608 Malloc0 00:30:13.608 19:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.608 19:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:13.608 19:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.608 19:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:13.608 Delay0 00:30:13.608 19:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.608 19:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:13.608 19:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.608 19:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:13.608 19:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.608 19:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:30:13.608 19:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
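
The rpc_cmd calls above, plus the listener add traced just below, build the whole data path for this test: a TCP transport, a 64 MiB Malloc bdev with 4096-byte blocks, a Delay0 bdev layered on top whose -r/-t/-w/-n arguments set average and p99 read/write latencies (microsecond-denominated in SPDK's delay bdev, so roughly 1 s each here) precisely so that I/O stays in flight long enough to be abortable, and subsystem cnode0 (serial SPDK0, -a meaning allow any host) exporting Delay0. Issued directly with scripts/rpc.py, and assuming the default /var/tmp RPC socket that rpc_cmd wraps, the same configuration would look like this:

    # Same target configuration as the rpc_cmd trace, driven by scripts/rpc.py.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0        # 64 MiB, 4 KiB blocks
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000               # added latency keeps I/O queued
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    # Listener add, traced just below:
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
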
00:30:13.608 19:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:13.609 19:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.609 19:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:13.609 19:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.609 19:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:13.609 [2024-11-26 19:20:30.752857] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:13.609 19:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.609 19:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:13.609 19:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.609 19:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:13.609 19:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.609 19:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:30:13.868 [2024-11-26 19:20:30.893896] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:15.780 Initializing NVMe Controllers 00:30:15.780 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:15.780 controller IO queue size 128 less than required 00:30:15.780 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:30:15.780 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:30:15.780 Initialization complete. Launching workers. 
00:30:15.780 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28332 00:30:15.780 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28393, failed to submit 66 00:30:15.780 success 28332, unsuccessful 61, failed 0 00:30:15.780 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:15.780 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.780 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:15.780 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.780 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:30:15.780 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:30:15.780 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:15.780 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:30:16.041 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:16.041 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:30:16.041 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:16.041 19:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:16.041 rmmod nvme_tcp 00:30:16.041 rmmod nvme_fabrics 00:30:16.041 rmmod nvme_keyring 00:30:16.041 19:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:16.041 19:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:30:16.041 19:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:30:16.041 19:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3139228 ']' 00:30:16.041 19:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3139228 00:30:16.041 19:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3139228 ']' 00:30:16.041 19:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3139228 00:30:16.041 19:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:30:16.041 19:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:16.041 19:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3139228 00:30:16.041 19:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:16.041 19:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:16.041 19:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3139228' 00:30:16.041 killing process with pid 3139228 
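
The abort tallies above are internally consistent: 127 completed + 28332 failed = 28459 I/Os, and 28393 submitted + 66 unsubmittable = 28459 abort attempts, one per I/O. Of the submitted aborts, 28332 succeeded (the 28332 "failed" I/Os) and 61 did not; numerically, 66 + 61 = 127 matches the I/Os that completed normally before their abort could take effect. A quick check of that bookkeeping (values copied from the log above):

    # Sanity-check the abort run's counters.
    completed=127  failed=28332           # NS line: I/O completed / failed
    submitted=28393 not_submitted=66      # CTRLR line: aborts submitted / failed to submit
    success=28332  unsuccessful=61        # summary line
    (( completed + failed == submitted + not_submitted )) || echo "I/O vs abort count mismatch"
    (( success + unsuccessful == submitted ))             || echo "abort outcomes mismatch"
    (( not_submitted + unsuccessful == completed ))       || echo "completion accounting mismatch"
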
00:30:16.041 19:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3139228 00:30:16.041 19:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3139228 00:30:16.302 19:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:16.302 19:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:16.302 19:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:16.302 19:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:30:16.302 19:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:30:16.302 19:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:16.302 19:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:30:16.302 19:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:16.302 19:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:16.302 19:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:16.302 19:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:16.302 19:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:18.212 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:18.212 00:30:18.212 real 0m13.426s 00:30:18.212 user 0m10.960s 00:30:18.212 sys 0m7.018s 00:30:18.212 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:18.212 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:18.212 ************************************ 00:30:18.212 END TEST nvmf_abort 00:30:18.212 ************************************ 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:18.473 ************************************ 00:30:18.473 START TEST nvmf_ns_hotplug_stress 00:30:18.473 ************************************ 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:18.473 * Looking for test storage... 
00:30:18.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:18.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.473 --rc genhtml_branch_coverage=1 00:30:18.473 --rc genhtml_function_coverage=1 00:30:18.473 --rc genhtml_legend=1 00:30:18.473 --rc geninfo_all_blocks=1 00:30:18.473 --rc geninfo_unexecuted_blocks=1 00:30:18.473 00:30:18.473 ' 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:18.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.473 --rc genhtml_branch_coverage=1 00:30:18.473 --rc genhtml_function_coverage=1 00:30:18.473 --rc genhtml_legend=1 00:30:18.473 --rc geninfo_all_blocks=1 00:30:18.473 --rc geninfo_unexecuted_blocks=1 00:30:18.473 00:30:18.473 ' 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:18.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.473 --rc genhtml_branch_coverage=1 00:30:18.473 --rc genhtml_function_coverage=1 00:30:18.473 --rc genhtml_legend=1 00:30:18.473 --rc geninfo_all_blocks=1 00:30:18.473 --rc geninfo_unexecuted_blocks=1 00:30:18.473 00:30:18.473 ' 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:18.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.473 --rc genhtml_branch_coverage=1 00:30:18.473 --rc genhtml_function_coverage=1 
00:30:18.473 --rc genhtml_legend=1 00:30:18.473 --rc geninfo_all_blocks=1 00:30:18.473 --rc geninfo_unexecuted_blocks=1 00:30:18.473 00:30:18.473 ' 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:18.473 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:18.734 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:18.734 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:18.734 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:18.734 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:18.734 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:18.734 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:18.734 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:18.734 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:18.734 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:30:18.734 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:18.734 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:18.734 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:30:18.734 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.735 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.735 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.735 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:30:18.735 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.735 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:30:18.735 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:18.735 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:18.735 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:18.735 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:18.735 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:18.735 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:18.735 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:18.735 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:18.735 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:18.735 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:18.735 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:18.735 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:30:18.735 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:18.735 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:18.735 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:18.735 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:18.735 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:18.735 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:18.735 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:18.735 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:18.735 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:18.735 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:18.735 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:30:18.735 19:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:26.871 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:26.871 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:30:26.871 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:26.871 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:26.871 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:26.871 19:20:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:26.871 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:26.871 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:30:26.871 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:26.871 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:30:26.871 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:30:26.871 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:30:26.871 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:30:26.871 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:30:26.871 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:30:26.871 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:26.871 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:26.871 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:26.871 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:26.871 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:26.871 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:26.871 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:26.871 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:26.871 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:26.871 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:26.871 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:26.871 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:26.871 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:26.872 19:20:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:26.872 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:26.872 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:26.872 
19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:26.872 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:26.872 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:26.872 19:20:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:26.872 19:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:26.872 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:26.872 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:26.872 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:26.872 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:26.872 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:26.872 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:26.872 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:26.872 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:26.872 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:26.872 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.705 ms 00:30:26.872 00:30:26.872 --- 10.0.0.2 ping statistics --- 00:30:26.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.872 rtt min/avg/max/mdev = 0.705/0.705/0.705/0.000 ms 00:30:26.873 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:26.873 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:26.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:30:26.873 00:30:26.873 --- 10.0.0.1 ping statistics --- 00:30:26.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.873 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:30:26.873 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:26.873 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:30:26.873 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:26.873 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:26.873 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:26.873 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:26.873 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:26.873 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:26.873 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:26.873 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:30:26.873 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:26.873 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:26.873 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:26.873 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3144018 00:30:26.873 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3144018 00:30:26.873 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:26.873 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3144018 ']' 00:30:26.873 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:26.873 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:26.873 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:26.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
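[editor's note] Condensed, the namespace plumbing logged just above (common.sh@267-291) is about a dozen ip(8) commands: flush the two ports, move the target-side port cvl_0_0 into a fresh namespace, address both ends of the 10.0.0.0/24 link, bring the interfaces up, open TCP/4420 in the firewall, and ping in both directions to prove connectivity. A hedged condensation with names and addresses taken verbatim from the log (the real nvmf_tcp_init in test/nvmf/common.sh also handles device discovery, and its iptables wrapper tags the rule with an SPDK_NVMF comment for later cleanup):

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"          # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                        # root ns -> target ns
    ip netns exec "$NS" ping -c 1 10.0.0.1    # target ns -> root ns
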
00:30:26.873 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:26.873 19:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:26.873 [2024-11-26 19:20:43.286304] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:26.873 [2024-11-26 19:20:43.287410] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:30:26.873 [2024-11-26 19:20:43.287457] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:26.873 [2024-11-26 19:20:43.387957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:26.873 [2024-11-26 19:20:43.439435] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:26.873 [2024-11-26 19:20:43.439494] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:26.873 [2024-11-26 19:20:43.439508] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:26.873 [2024-11-26 19:20:43.439516] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:26.873 [2024-11-26 19:20:43.439522] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:26.873 [2024-11-26 19:20:43.441389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:26.873 [2024-11-26 19:20:43.441550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:26.873 [2024-11-26 19:20:43.441550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:26.873 [2024-11-26 19:20:43.521052] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:26.873 [2024-11-26 19:20:43.522131] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:26.873 [2024-11-26 19:20:43.522592] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:26.873 [2024-11-26 19:20:43.522656] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
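[editor's note] At this point the harness sits in waitforlisten: nvmf_tgt (pid 3144018) was launched inside the namespace with -i 0 -e 0xFFFF --interrupt-mode -m 0xE, and since 0xE is binary 1110 the three reactor notices on cores 1, 2 and 3 are exactly what that mask asks for. The wait itself can be approximated as a poll against the RPC socket; this is a hedged sketch, not the autotest_common.sh implementation:

    # Assumed approximation of waitforlisten: poll until the app's RPC server
    # answers rpc_get_methods, failing fast if the process dies first.
    wait_for_rpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died
            if "$SPDK_ROOT/scripts/rpc.py" -s "$sock" rpc_get_methods &>/dev/null; then
                return 0                              # RPC server is up
            fi
            sleep 0.1
        done
        return 1                                      # timed out
    }
    # e.g. wait_for_rpc 3144018 /var/tmp/spdk.sock
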
00:30:27.133 19:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:27.133 19:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:30:27.133 19:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:27.133 19:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:27.133 19:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:27.133 19:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:27.133 19:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:30:27.133 19:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:27.133 [2024-11-26 19:20:44.326504] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:27.394 19:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:27.394 19:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:27.653 [2024-11-26 19:20:44.711398] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:27.653 19:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:27.912 19:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:30:27.912 Malloc0 00:30:28.171 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:28.171 Delay0 00:30:28.171 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:28.430 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:30:28.690 NULL1 00:30:28.690 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
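[editor's note] Collected in one place, the fixture the hotplug test just assembled is a handful of RPCs: a TCP transport with the logged options, subsystem cnode1 capped at 10 namespaces (-m 10), a Malloc0-backed Delay0 bdev as the first namespace, and a resizable NULL1 bdev as the second. The loop that follows then alternates nvmf_subsystem_remove_ns/nvmf_subsystem_add_ns on Delay0 and grows NULL1 via bdev_null_resize while spdk_nvme_perf hammers the target. A hedged consolidation of ns_hotplug_stress.sh@27-36 plus one loop iteration, flag values copied verbatim and SPDK_ROOT assumed as earlier:

    rpc="$SPDK_ROOT/scripts/rpc.py"

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0          # 32 MiB, 512 B blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000    # latency values as logged
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc bdev_null_create NULL1 1000 512               # the resizable namespace
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # One iteration of the stress loop (@44-50): verify perf is still alive,
    # hot-remove nsid 1, re-add Delay0, then grow NULL1 by one step.
    kill -0 "$PERF_PID"
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc bdev_null_resize NULL1 1001
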
00:30:28.690 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3144463 00:30:28.690 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:28.690 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:30:28.690 19:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:28.949 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:29.209 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:30:29.209 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:30:29.469 true 00:30:29.469 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:29.469 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:29.469 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:29.730 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:30:29.730 19:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:30:29.991 true 00:30:29.991 19:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:29.991 19:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:30.252 19:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:30.513 19:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:30:30.513 19:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:30:30.513 true 00:30:30.513 19:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:30.513 19:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:30.773 19:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:31.033 19:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:30:31.033 19:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:30:31.033 true 00:30:31.293 19:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:31.293 19:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:31.293 19:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:31.552 19:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:30:31.552 19:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:30:31.812 true 00:30:31.812 19:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:31.812 19:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:31.812 19:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:32.072 19:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:30:32.072 19:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:30:32.332 true 00:30:32.332 19:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:32.332 19:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:32.592 19:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:30:32.592 19:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:30:32.592 19:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:30:32.852 true 00:30:32.852 19:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:32.852 19:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:33.113 19:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:33.113 19:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:30:33.113 19:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:30:33.373 true 00:30:33.373 19:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:33.373 19:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:33.633 19:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:33.894 19:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:30:33.894 19:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:30:33.894 true 00:30:33.894 19:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:33.894 19:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:34.154 19:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:34.415 19:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:30:34.415 19:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:30:34.415 true 00:30:34.674 19:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 3144463 00:30:34.674 19:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:34.674 19:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:34.935 19:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:30:34.935 19:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:30:35.194 true 00:30:35.194 19:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:35.194 19:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:35.453 19:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:35.453 19:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:30:35.453 19:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:30:35.713 true 00:30:35.713 19:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:35.713 19:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:35.973 19:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:35.973 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:30:35.973 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:30:36.233 true 00:30:36.233 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:36.233 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:36.493 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:36.754 19:20:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:30:36.754 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:30:36.754 true 00:30:36.754 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:36.754 19:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:37.014 19:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:37.274 19:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:30:37.274 19:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:30:37.274 true 00:30:37.274 19:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:37.274 19:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:37.534 19:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:37.795 19:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:30:37.795 19:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:30:37.795 true 00:30:37.795 19:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:37.795 19:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:38.055 19:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:38.315 19:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:30:38.315 19:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:30:38.315 true 00:30:38.575 19:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:38.575 19:20:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:38.575 19:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:38.835 19:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:30:38.835 19:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:30:39.096 true 00:30:39.096 19:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:39.096 19:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:39.096 19:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:39.357 19:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:30:39.357 19:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:30:39.617 true 00:30:39.617 19:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:39.617 19:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:39.877 19:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:39.877 19:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:30:39.877 19:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:30:40.137 true 00:30:40.137 19:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:40.137 19:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:40.397 19:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:40.397 19:20:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:30:40.397 19:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:30:40.657 true 00:30:40.657 19:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:40.657 19:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:40.917 19:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:41.178 19:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:30:41.178 19:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:30:41.178 true 00:30:41.178 19:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:41.178 19:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:41.438 19:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:41.698 19:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:30:41.698 19:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:30:41.698 true 00:30:41.698 19:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:41.698 19:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:41.959 19:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:42.219 19:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:30:42.219 19:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:30:42.219 true 00:30:42.219 19:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:42.219 19:20:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:42.480 19:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:42.741 19:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:30:42.741 19:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:30:42.741 true 00:30:43.001 19:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:43.001 19:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:43.001 19:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:43.262 19:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:30:43.262 19:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:30:43.523 true 00:30:43.523 19:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:43.523 19:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:43.523 19:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:43.785 19:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:30:43.785 19:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:30:44.045 true 00:30:44.045 19:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:44.045 19:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:44.045 19:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:44.306 19:21:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:30:44.306 19:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:30:44.566 true 00:30:44.566 19:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:44.566 19:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:44.827 19:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:44.827 19:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:30:44.827 19:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:30:45.089 true 00:30:45.089 19:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:45.089 19:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:45.349 19:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:45.349 19:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:30:45.349 19:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:30:45.609 true 00:30:45.609 19:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:45.609 19:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:45.869 19:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:46.129 19:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:30:46.129 19:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:30:46.129 true 00:30:46.129 19:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:46.129 19:21:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:46.390 19:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:46.651 19:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:30:46.651 19:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:30:46.651 true 00:30:46.651 19:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:46.651 19:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:46.912 19:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:47.173 19:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:30:47.173 19:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:30:47.173 true 00:30:47.433 19:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:47.433 19:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:47.433 19:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:47.694 19:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:30:47.694 19:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:30:47.954 true 00:30:47.954 19:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:47.954 19:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:47.954 19:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:48.214 19:21:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:30:48.214 19:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:30:48.475 true 00:30:48.475 19:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:48.475 19:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:48.475 19:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:48.735 19:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:30:48.735 19:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:30:48.995 true 00:30:48.995 19:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:48.995 19:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:49.255 19:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:49.255 19:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:30:49.255 19:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:30:49.515 true 00:30:49.515 19:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:49.515 19:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:49.776 19:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:49.776 19:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:30:49.776 19:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:30:50.036 true 00:30:50.036 19:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:50.036 19:21:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:50.340 19:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:50.340 19:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:30:50.340 19:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:30:50.697 true 00:30:50.697 19:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:50.697 19:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:50.961 19:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:50.961 19:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:30:50.961 19:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:30:51.222 true 00:30:51.222 19:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:51.222 19:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:51.483 19:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:51.483 19:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:30:51.483 19:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:30:51.744 true 00:30:51.744 19:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:51.744 19:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:52.005 19:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:52.005 19:21:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:30:52.005 19:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:30:52.265 true 00:30:52.265 19:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:52.265 19:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:52.526 19:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:52.787 19:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:30:52.787 19:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:30:52.787 true 00:30:52.787 19:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:52.787 19:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:53.047 19:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:53.311 19:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:30:53.311 19:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:30:53.311 true 00:30:53.311 19:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:53.311 19:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:53.572 19:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:53.833 19:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:30:53.833 19:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:30:53.833 true 00:30:53.833 19:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:53.833 19:21:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:54.095 19:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:54.356 19:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:30:54.356 19:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:30:54.356 true 00:30:54.617 19:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:54.617 19:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:54.617 19:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:54.876 19:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:30:54.876 19:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:30:55.137 true 00:30:55.137 19:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:55.137 19:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:55.137 19:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:55.398 19:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:30:55.398 19:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:30:55.659 true 00:30:55.659 19:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:55.659 19:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:55.659 19:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:55.920 19:21:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:30:55.920 19:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:30:56.181 true 00:30:56.181 19:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:56.181 19:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:56.442 19:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:56.442 19:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:30:56.442 19:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:30:56.703 true 00:30:56.703 19:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:56.703 19:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:56.963 19:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:56.963 19:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:30:56.963 19:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:30:57.224 true 00:30:57.224 19:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:57.224 19:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:57.484 19:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:57.744 19:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:30:57.744 19:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:30:57.744 true 00:30:57.744 19:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463 00:30:57.744 19:21:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:58.003 19:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:58.264 19:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:30:58.264 19:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:30:58.264 true
00:30:58.264 19:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463
00:30:58.264 19:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:58.524 19:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:58.784 19:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054
00:30:58.784 19:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:30:59.045 true
00:30:59.045 19:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463
00:30:59.045 19:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:59.045 19:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:59.045 Initializing NVMe Controllers
00:30:59.045 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:59.045 Controller IO queue size 128, less than required.
00:30:59.045 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:59.045 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:30:59.045 Initialization complete. Launching workers.
00:30:59.045 ========================================================
00:30:59.045 Latency(us)
00:30:59.045 Device Information : IOPS MiB/s Average min max
00:30:59.045 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 29996.87 14.65 4267.09 1103.52 43466.44
00:30:59.045 ========================================================
00:30:59.045 Total : 29996.87 14.65 4267.09 1103.52 43466.44
00:30:59.045
00:30:59.305 19:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:30:59.305 19:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:30:59.566 true
00:30:59.566 19:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3144463
00:30:59.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3144463) - No such process
00:30:59.566 19:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3144463
00:30:59.566 19:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:59.566 19:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:30:59.826 19:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:30:59.826 19:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:30:59.826 19:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:30:59.826 19:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:59.826 19:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:31:00.085 null0
00:31:00.085 19:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:00.085 19:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:00.085 19:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:31:00.085 null1
00:31:00.085 19:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:00.085 19:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:00.085 19:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:31:00.345 null2
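The sh@NN markers in the trace are line numbers inside target/ns_hotplug_stress.sh, and the perf summary table above marks the point where the background I/O generator (PID 3144463) exited; that is why the next kill -0 check fails with "No such process" and the resize loop ends. Reconstructed from the logged commands alone, the loop that stepped null_size through 1007..1055 in this excerpt appears to be roughly the sketch below. This is a hedged reading of the trace, not the script itself: $rpc_py and $perf_pid are assumed variable names, and the real script may differ in detail.

    # Sketch of the traced hotplug/resize loop (sh@44-sh@50), assuming
    # $rpc_py points at spdk/scripts/rpc.py and $perf_pid holds the PID of
    # the background I/O generator (3144463 in this run).
    while kill -0 "$perf_pid"; do                                          # sh@44: run while perf is alive
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # sh@45: hot-remove namespace 1
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # sh@46: re-add it backed by Delay0
        null_size=$((null_size + 1))                                       # sh@49: bump the size for this pass
        "$rpc_py" bdev_null_resize NULL1 "$null_size"                      # sh@50: resize NULL1 under live I/O
    done
    wait "$perf_pid"                                                       # sh@53: reap the exited perf process

Once the loop and wait complete, sh@54 and sh@55 remove namespaces 1 and 2, and the trace continues below with the bdev_null_create calls (null0 through null7) that set up the concurrent phase.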
00:31:00.345 19:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:00.345 19:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:00.345 19:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:31:00.605 null3 00:31:00.605 19:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:00.605 19:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:00.605 19:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:31:00.605 null4 00:31:00.605 19:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:00.605 19:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:00.605 19:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:31:00.866 null5 00:31:00.866 19:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:00.866 19:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:00.866 19:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:31:01.128 null6 00:31:01.128 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:01.128 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:01.128 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:31:01.128 null7 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
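From sh@58 onward the script runs the concurrent stress phase: eight null bdevs, one add_remove worker per namespace/bdev pair, all backgrounded and then reaped with a single wait. A hedged reconstruction from the sh@14-sh@18 and sh@58-sh@66 markers follows; $rpc_py is again an assumed name, and the real script's quoting and loop form may differ slightly.

    # Each worker toggles one namespace on and off ten times (sh@14-sh@18).
    add_remove() {
        local nsid=$1 bdev=$2                                                             # sh@14
        for ((i = 0; i < 10; i++)); do                                                    # sh@16
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev" # sh@17
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"         # sh@18
        done
    }

    nthreads=8
    pids=()                                          # sh@58
    for ((i = 0; i < nthreads; i++)); do             # sh@59
        "$rpc_py" bdev_null_create "null$i" 100 4096 # sh@60: create null0..null7
    done
    for ((i = 0; i < nthreads; i++)); do             # sh@62
        add_remove $((i + 1)) "null$i" &             # sh@63: e.g. "add_remove 1 null0" in the trace
        pids+=($!)                                   # sh@64
    done
    wait "${pids[@]}"                                # sh@66: the eight worker PIDs listed below

Because the eight workers run in parallel, their nvmf_subsystem_add_ns and nvmf_subsystem_remove_ns calls against nqn.2016-06.io.spdk:cnode1 interleave freely in the rest of the trace.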
00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3151461 3151462 3151464 3151466 3151468 3151470 3151472 3151474 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:01.390 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:01.651 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:01.651 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:01.651 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:01.651 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:01.651 19:21:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:01.651 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:01.651 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.651 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:01.651 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:01.651 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.651 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:01.651 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:01.651 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.651 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:01.651 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:01.651 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.651 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:01.651 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:01.651 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.651 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:01.651 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:01.651 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.651 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:01.651 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:01.651 19:21:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.651 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:01.651 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:01.651 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.652 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:01.912 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:01.912 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:01.912 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:01.912 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:01.912 19:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:01.912 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:01.912 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:01.912 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:01.912 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.912 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:01.912 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:01.912 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:01.912 19:21:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:01.912 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:02.173 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.173 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.173 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:02.173 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.174 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.174 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:02.174 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.174 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.174 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:02.174 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.174 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.174 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:02.174 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.174 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.174 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:02.174 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:02.174 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.174 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.174 19:21:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:02.174 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:02.174 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:02.174 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:02.174 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:02.174 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:02.435 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:02.435 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.435 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.435 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:02.435 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.435 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.435 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:02.435 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:02.435 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.435 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.435 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:02.435 19:21:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.435 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.435 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:02.435 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.435 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.435 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:02.435 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.435 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.435 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:02.435 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:02.435 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:02.435 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.435 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.435 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:02.695 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:02.695 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.695 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.695 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:02.695 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:31:02.695 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:02.695 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:02.695 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.695 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.695 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:02.695 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.695 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.695 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:02.695 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:02.695 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.696 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.696 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:02.696 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.696 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.696 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:02.696 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:02.696 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.696 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.696 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:02.696 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.696 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.696 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:02.956 19:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:02.956 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:02.956 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.956 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.956 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:02.956 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:02.956 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.956 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.956 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:02.956 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:02.956 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:02.956 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:03.216 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.216 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.216 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:03.216 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.216 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.216 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:03.216 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.216 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.216 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:03.216 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:03.216 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.216 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.216 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:03.216 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.216 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.216 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:03.216 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:03.216 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.216 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.216 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:03.216 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:03.216 19:21:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:03.216 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.216 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.216 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:03.216 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:03.477 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:03.477 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.477 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.477 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:03.477 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:03.477 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:03.477 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.477 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.478 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:03.478 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.478 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.478 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:03.478 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:03.478 
19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:03.478 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.478 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.478 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.478 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.478 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:03.478 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:03.478 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.478 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.478 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:03.478 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.478 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.478 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:03.739 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:03.739 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:03.739 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.739 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.739 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:03.739 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.739 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.739 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:03.739 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:03.739 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:03.739 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:03.739 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:03.739 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.739 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.739 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:04.001 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.001 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.001 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:04.001 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.001 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.001 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:04.001 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.001 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.001 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:04.001 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.001 19:21:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.001 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:04.001 19:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:04.001 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.001 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.001 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:04.001 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:04.001 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:04.001 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:04.001 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:04.001 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:04.001 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.001 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.001 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:04.001 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:04.264 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:04.264 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.264 19:21:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.264 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:04.264 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.264 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.264 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:04.264 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.264 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.264 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:04.264 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.264 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.264 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:04.264 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.264 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.264 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:04.264 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.264 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.264 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:04.264 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:04.264 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.264 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.264 19:21:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:04.264 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:04.264 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:04.525 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:04.525 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:04.525 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:04.525 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.525 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.525 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:04.525 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:04.525 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.525 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.525 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:04.525 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.525 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.525 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:04.525 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:04.525 19:21:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.525 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.525 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:04.525 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.525 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.525 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:04.787 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.787 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.787 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:04.787 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:04.787 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:04.787 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:04.787 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.787 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.787 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:04.787 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.787 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.787 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:04.787 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:31:04.787 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:04.787 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.787 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.787 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:04.787 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.787 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.787 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:04.787 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:04.787 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.787 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.787 19:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:05.048 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:05.048 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.048 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.048 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.048 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.048 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:05.048 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.048 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.048 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.048 19:21:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.048 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.048 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.048 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:05.308 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.308 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.308 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.308 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.309 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:31:05.309 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:31:05.309 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:05.309 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:31:05.309 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:05.309 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:31:05.309 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:05.309 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:05.309 rmmod nvme_tcp 00:31:05.309 rmmod nvme_fabrics 00:31:05.309 rmmod nvme_keyring 00:31:05.309 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:05.309 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:31:05.309 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:31:05.309 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3144018 ']' 00:31:05.309 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3144018 00:31:05.309 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3144018 ']' 00:31:05.309 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3144018 00:31:05.309 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:31:05.309 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:05.309 19:21:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3144018 00:31:05.569 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:05.569 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:05.569 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3144018' 00:31:05.569 killing process with pid 3144018 00:31:05.569 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3144018 00:31:05.569 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3144018 00:31:05.569 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:05.569 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:05.569 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:05.569 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:31:05.569 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:31:05.569 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:05.569 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:31:05.569 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:05.569 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:05.569 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:05.569 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:05.569 19:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:08.113 19:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:08.113 00:31:08.113 real 0m49.279s 00:31:08.113 user 3m4.650s 00:31:08.113 sys 0m22.513s 00:31:08.113 19:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:08.113 19:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:08.113 ************************************ 00:31:08.113 END TEST nvmf_ns_hotplug_stress 00:31:08.113 ************************************ 00:31:08.113 19:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:08.113 19:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:08.113 19:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:08.113 19:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:08.113 ************************************ 00:31:08.113 START TEST nvmf_delete_subsystem 00:31:08.113 ************************************ 00:31:08.113 19:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:08.113 * Looking for test storage... 00:31:08.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:08.113 19:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:08.113 19:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:31:08.113 19:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:08.113 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:08.113 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:08.113 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:08.113 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:08.113 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:31:08.113 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:31:08.113 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:31:08.113 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:31:08.113 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:31:08.113 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:31:08.113 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:31:08.113 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:08.113 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:31:08.113 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:31:08.113 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:08.113 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:08.113 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:31:08.113 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:31:08.113 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:08.113 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:31:08.113 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:31:08.113 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:31:08.113 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:31:08.113 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:08.113 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:31:08.113 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:31:08.113 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:08.113 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:08.113 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:31:08.113 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:08.113 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:08.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.113 --rc genhtml_branch_coverage=1 00:31:08.113 --rc genhtml_function_coverage=1 00:31:08.113 --rc genhtml_legend=1 00:31:08.113 --rc geninfo_all_blocks=1 00:31:08.113 --rc geninfo_unexecuted_blocks=1 00:31:08.113 00:31:08.113 ' 00:31:08.113 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:08.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.113 --rc genhtml_branch_coverage=1 00:31:08.113 --rc genhtml_function_coverage=1 00:31:08.113 --rc genhtml_legend=1 00:31:08.113 --rc geninfo_all_blocks=1 00:31:08.113 --rc geninfo_unexecuted_blocks=1 00:31:08.113 00:31:08.113 ' 00:31:08.113 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:08.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.113 --rc genhtml_branch_coverage=1 00:31:08.113 --rc genhtml_function_coverage=1 00:31:08.113 --rc genhtml_legend=1 00:31:08.113 --rc geninfo_all_blocks=1 00:31:08.113 --rc geninfo_unexecuted_blocks=1 00:31:08.113 00:31:08.113 ' 00:31:08.113 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:08.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.113 --rc genhtml_branch_coverage=1 00:31:08.113 --rc genhtml_function_coverage=1 00:31:08.113 --rc 
genhtml_legend=1 00:31:08.113 --rc geninfo_all_blocks=1 00:31:08.113 --rc geninfo_unexecuted_blocks=1 00:31:08.113 00:31:08.113 ' 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:08.114 19:21:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:31:08.114 19:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:16.253 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:16.253 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:31:16.253 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:16.253 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:16.253 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:16.253 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:16.253 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:16.253 19:21:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:31:16.253 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:16.253 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:31:16.253 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:31:16.253 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:31:16.253 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:31:16.253 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:31:16.253 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:31:16.253 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:16.253 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:16.253 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:16.253 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:16.253 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:16.253 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:16.253 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:16.253 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:16.253 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:16.253 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:16.253 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:16.253 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:16.253 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:16.253 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:16.253 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:16.253 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:16.253 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:16.253 19:21:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:16.253 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:16.254 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:16.254 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.254 19:21:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:16.254 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:16.254 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:16.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:16.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:31:16.254 00:31:16.254 --- 10.0.0.2 ping statistics --- 00:31:16.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.254 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:16.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:16.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:31:16.254 00:31:16.254 --- 10.0.0.1 ping statistics --- 00:31:16.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.254 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:16.254 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:31:16.255 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:16.255 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:16.255 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:16.255 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:16.255 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:16.255 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:16.255 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:16.255 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:31:16.255 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:16.255 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:16.255 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:16.255 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3156551 00:31:16.255 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3156551 00:31:16.255 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:16.255 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3156551 ']' 00:31:16.255 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:16.255 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:16.255 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:16.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
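At this point nvmftestinit has finished building the loopback topology the rest of the test depends on: the first E810 port (cvl_0_0) is moved into a private network namespace and addressed as the target at 10.0.0.2, its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits NVMe/TCP traffic on port 4420, and a ping in each direction proves reachability. Condensed from the common.sh commands traced above, a standalone sketch of that setup (assuming the same interface names and an SPDK checkout as the working directory):

  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port out of the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
  # Launch the target inside the namespace on two cores in interrupt mode,
  # as nvmfappstart does above, then poll the RPC socket until it answers.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &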
00:31:16.255 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:16.255 19:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:16.255 [2024-11-26 19:21:32.672477] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:16.255 [2024-11-26 19:21:32.673618] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:31:16.255 [2024-11-26 19:21:32.673673] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:16.255 [2024-11-26 19:21:32.774694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:16.255 [2024-11-26 19:21:32.825004] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:16.255 [2024-11-26 19:21:32.825056] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:16.255 [2024-11-26 19:21:32.825064] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:16.255 [2024-11-26 19:21:32.825072] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:16.255 [2024-11-26 19:21:32.825078] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:16.255 [2024-11-26 19:21:32.826839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:16.255 [2024-11-26 19:21:32.826843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:16.255 [2024-11-26 19:21:32.905401] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:16.255 [2024-11-26 19:21:32.906211] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:16.255 [2024-11-26 19:21:32.906429] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
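The startup notices above are the interrupt-mode run's sanity anchor: DPDK EAL initializes, one reactor starts on each of the two cores allowed by -m 0x3, and every spdk_thread (the app thread plus the two nvmf_tgt poll groups) is switched to interrupt mode instead of busy polling. On a live target this state can be inspected over RPC; a rough sketch, where the jq filter is illustrative only, since the exact JSON layout varies across SPDK versions:

  # List the reactors and the lcores they run on (expect lcores 0 and 1 here).
  ./scripts/rpc.py framework_get_reactors | jq '.reactors[].lcore'
  # Per-thread busy/idle tick counters; an interrupt-mode thread should sit
  # almost entirely idle while no initiator is connected.
  ./scripts/rpc.py thread_get_stats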
00:31:16.516 19:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:16.516 19:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:31:16.516 19:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:16.516 19:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:16.516 19:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:16.516 19:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:16.516 19:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:16.516 19:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.516 19:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:16.516 [2024-11-26 19:21:33.523865] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:16.516 19:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.516 19:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:16.516 19:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.516 19:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:16.516 19:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.516 19:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:16.516 19:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.516 19:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:16.516 [2024-11-26 19:21:33.556388] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:16.516 19:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.516 19:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:31:16.516 19:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.516 19:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:16.516 NULL1 00:31:16.516 19:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.516 19:21:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:16.516 19:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.516 19:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:16.516 Delay0 00:31:16.516 19:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.516 19:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:16.516 19:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.516 19:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:16.516 19:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.516 19:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3156653 00:31:16.516 19:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:31:16.516 19:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:16.516 [2024-11-26 19:21:33.680553] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
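The target side is now fully populated: a 1000 MiB null bdev (NULL1) wrapped in a delay bdev (Delay0) whose configured latencies are on the order of a second, exposed as a namespace of nqn.2016-06.io.spdk:cnode1 behind the 10.0.0.2:4420 listener. The long delays are the point of the test: they guarantee that plenty of I/O is still queued when the subsystem is deleted two seconds into the run. Reduced to its essentials, the delete-under-load pattern exercised next looks roughly like this (a sketch, with paths relative to an SPDK checkout):

  ./build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &      # QD 128, 512-byte IOs, 70% reads, cores 2-3
  perf_pid=$!
  sleep 2                                            # let requests pile up in the delay bdev
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # yank the subsystem mid-I/O
  # Every outstanding request now completes with an error (the sct=0, sc=8
  # lines below) and perf exits non-zero; the script then polls with
  # kill -0 "$perf_pid" until the process is gone.

The error completions that follow are therefore the expected outcome of the test, not a failure.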
00:31:18.433 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:18.433 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:18.433 19:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Write completed with error (sct=0, sc=8) 00:31:18.694 starting I/O failed: -6 00:31:18.694 Write completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 starting I/O failed: -6 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Write completed with error (sct=0, sc=8) 00:31:18.694 starting I/O failed: -6 00:31:18.694 Write completed with error (sct=0, sc=8) 00:31:18.694 Write completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Write completed with error (sct=0, sc=8) 00:31:18.694 starting I/O failed: -6 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Write completed with error (sct=0, sc=8) 00:31:18.694 starting I/O failed: -6 00:31:18.694 Write completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Write completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 starting I/O failed: -6 00:31:18.694 Write completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 starting I/O failed: -6 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Write completed with error (sct=0, sc=8) 00:31:18.694 Write completed with error (sct=0, sc=8) 00:31:18.694 starting I/O failed: -6 00:31:18.694 Write completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Write completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 starting I/O failed: -6 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Write completed with error (sct=0, sc=8) 00:31:18.694 starting I/O failed: -6 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 starting I/O failed: -6 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 [2024-11-26 19:21:35.729651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cfb680 is same with the state(6) to be set 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Write completed with error (sct=0, sc=8) 00:31:18.694 Read 
completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Write completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Write completed with error (sct=0, sc=8) 00:31:18.694 Write completed with error (sct=0, sc=8) 00:31:18.694 Write completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Write completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Write completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Write completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Write completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Write completed with error (sct=0, sc=8) 00:31:18.694 Write completed with error (sct=0, sc=8) 00:31:18.694 Write completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Write completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Write completed with error (sct=0, sc=8) 00:31:18.694 Write completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 starting I/O failed: -6 00:31:18.694 Write completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Write completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 starting I/O failed: -6 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Read completed with error (sct=0, sc=8) 00:31:18.694 Write completed with error (sct=0, sc=8) 00:31:18.694 Write completed with error (sct=0, sc=8) 00:31:18.694 starting I/O failed: -6 00:31:18.694 Read completed with error (sct=0, 
sc=8) 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 starting I/O failed: -6 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 starting I/O failed: -6 00:31:18.695 Write completed with error (sct=0, sc=8) 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 Write completed with error (sct=0, sc=8) 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 starting I/O failed: -6 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 Write completed with error (sct=0, sc=8) 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 starting I/O failed: -6 00:31:18.695 Write completed with error (sct=0, sc=8) 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 Write completed with error (sct=0, sc=8) 00:31:18.695 Write completed with error (sct=0, sc=8) 00:31:18.695 starting I/O failed: -6 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 starting I/O failed: -6 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 Write completed with error (sct=0, sc=8) 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 starting I/O failed: -6 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 [2024-11-26 19:21:35.733537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f38e400d490 is same with the state(6) to be set 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 Write completed with error (sct=0, sc=8) 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 Write completed with error (sct=0, sc=8) 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 Write completed with error (sct=0, sc=8) 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 Write completed with error (sct=0, sc=8) 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 Write completed with error (sct=0, sc=8) 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 Write completed with error (sct=0, sc=8) 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 Read completed with error (sct=0, sc=8) 00:31:18.695 Write completed with error (sct=0, sc=8) 00:31:18.695 Read completed 
with error (sct=0, sc=8)
00:31:18.695 [... a long burst of "Read/Write completed with error (sct=0, sc=8)" completions elided; sct=0, sc=8 is the NVMe generic status "Command Aborted due to SQ Deletion", the expected result when the subsystem is torn down while I/O is still in flight ...]
00:31:19.642 [2024-11-26 19:21:36.697868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cfc9b0 is same with the state(6) to be set
00:31:19.642 [... further aborted completions elided ...]
00:31:19.642 [2024-11-26 19:21:36.733935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cfb4a0 is same with the state(6) to be set
00:31:19.642 [... further aborted completions elided ...]
00:31:19.642 [2024-11-26 19:21:36.734027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cfb860 is same with the state(6) to be set
00:31:19.642 [... further aborted completions elided ...]
00:31:19.642 [2024-11-26 19:21:36.735868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f38e400d7c0 is same with the state(6) to be set
00:31:19.643 [... further aborted completions elided ...]
00:31:19.643 [2024-11-26 19:21:36.736026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f38e400d020 is same with the state(6) to be set
00:31:19.643 Initializing NVMe Controllers
00:31:19.643 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:19.643 Controller IO queue size 128, less than required.
00:31:19.643 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:19.643 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:31:19.643 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:31:19.643 Initialization complete. Launching workers.
00:31:19.643 ========================================================
00:31:19.643 Latency(us)
00:31:19.643 Device Information : IOPS MiB/s Average min max
00:31:19.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 168.17 0.08 897424.15 361.69 1007789.52
00:31:19.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 163.19 0.08 910517.02 316.63 1011403.16
00:31:19.643 ========================================================
00:31:19.643 Total : 331.36 0.16 903872.29 316.63 1011403.16
00:31:19.643
00:31:19.643 [2024-11-26 19:21:36.736673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cfc9b0 (9): Bad file descriptor
00:31:19.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:31:19.643 19:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:19.643 19:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:31:19.643 19:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3156653
00:31:19.643 19:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:31:20.309 19:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:31:20.309 19:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3156653
00:31:20.309 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3156653) - No such process
00:31:20.309 19:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3156653
00:31:20.309 19:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:31:20.309 19:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3156653
00:31:20.309 19:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:31:20.309 19:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:31:20.309 19:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:31:20.309 19:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:31:20.309 19:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3156653
00:31:20.309 19:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:31:20.309 19:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es >
128 )) 00:31:20.309 19:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:20.309 19:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:20.309 19:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:20.309 19:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.309 19:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:20.309 19:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.309 19:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:20.309 19:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.309 19:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:20.309 [2024-11-26 19:21:37.272206] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:20.309 19:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.309 19:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:20.309 19:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.309 19:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:20.309 19:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.309 19:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3157325 00:31:20.309 19:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:31:20.309 19:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3157325 00:31:20.309 19:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:20.309 19:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:20.309 [2024-11-26 19:21:37.371937] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
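[Editor's note: the trace above (delete_subsystem.sh lines 52-58, continued in the poll loop below) follows a simple pattern: launch spdk_nvme_perf against the target in the background, record its PID, then probe that PID with `kill -0` every 0.5 s until the workload exits on its own once the subsystem is deleted. A minimal bash sketch of the pattern, assuming nothing beyond what the trace shows; the perf flags are the ones from this run, and the bound of 20 iterations mirrors the `(( delay++ > 20 ))` check in the script:]

```bash
#!/usr/bin/env bash
# Launch the workload in the background and remember its PID
# (flags copied from the spdk_nvme_perf invocation in the trace).
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

# kill -0 sends no signal; it only tests that the PID still exists.
# Loop ends when the process is gone or after ~10 s (20 x 0.5 s).
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 20 )) && break
    sleep 0.5
done
wait "$perf_pid"   # reap the child and collect its exit status
```

[The `-c 0xC` core mask is binary 1100, i.e. cores 2 and 3, which is why the latency tables in this run report queues "from core 2" and "from core 3".]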
00:31:20.880 19:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:20.880 19:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3157325
00:31:20.880 19:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
[... the same (( delay++ > 20 )) / kill -0 3157325 / sleep 0.5 poll repeats five more times between 00:31:21.141 and 00:31:23.115 while spdk_nvme_perf finishes its 3-second run; identical iterations elided ...]
00:31:23.375 Initializing NVMe Controllers
00:31:23.375 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:23.375 Controller IO queue size 128, less than required.
00:31:23.375 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:23.375 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:31:23.375 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:31:23.375 Initialization complete. Launching workers.
00:31:23.375 ========================================================
00:31:23.375 Latency(us)
00:31:23.375 Device Information : IOPS MiB/s Average min max
00:31:23.375 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002418.91 1000288.74 1008386.82
00:31:23.375 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004130.47 1000384.48 1010922.89
00:31:23.375 ========================================================
00:31:23.375 Total : 256.00 0.12 1003274.69 1000288.74 1010922.89
00:31:23.375
00:31:23.635 19:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:23.635 19:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3157325
00:31:23.635 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3157325) - No such process
00:31:23.635 19:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3157325
00:31:23.635 19:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:31:23.635 19:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:31:23.635 19:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:23.635 19:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:31:23.636 19:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:23.636 19:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:31:23.636 19:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:23.636 19:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:23.636 rmmod nvme_tcp
00:31:23.897 rmmod nvme_fabrics
00:31:23.897 rmmod nvme_keyring
00:31:23.897 19:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:23.897 19:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:31:23.897 19:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:31:23.897 19:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3156551 ']'
00:31:23.897 19:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3156551
00:31:23.897 19:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3156551 ']'
00:31:23.897 19:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3156551
00:31:23.897 19:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:31:23.897 19:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:23.897 19:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3156551 00:31:23.897 19:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:23.897 19:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:23.897 19:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3156551' 00:31:23.897 killing process with pid 3156551 00:31:23.897 19:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3156551 00:31:23.897 19:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3156551 00:31:23.897 19:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:23.897 19:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:23.898 19:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:23.898 19:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:31:23.898 19:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:31:23.898 19:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:23.898 19:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:31:23.898 19:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:23.898 19:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:23.898 19:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:23.898 19:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:23.898 19:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:26.445 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:26.446 00:31:26.446 real 0m18.317s 00:31:26.446 user 0m26.264s 00:31:26.446 sys 0m7.544s 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:26.446 ************************************ 00:31:26.446 END TEST nvmf_delete_subsystem 00:31:26.446 ************************************ 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:26.446 ************************************ 00:31:26.446 START TEST nvmf_host_management 00:31:26.446 ************************************ 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:26.446 * Looking for test storage... 00:31:26.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:26.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.446 --rc genhtml_branch_coverage=1 00:31:26.446 --rc genhtml_function_coverage=1 00:31:26.446 --rc genhtml_legend=1 00:31:26.446 --rc geninfo_all_blocks=1 00:31:26.446 --rc geninfo_unexecuted_blocks=1 00:31:26.446 00:31:26.446 ' 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:26.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.446 --rc genhtml_branch_coverage=1 00:31:26.446 --rc genhtml_function_coverage=1 00:31:26.446 --rc genhtml_legend=1 00:31:26.446 --rc geninfo_all_blocks=1 00:31:26.446 --rc geninfo_unexecuted_blocks=1 00:31:26.446 00:31:26.446 ' 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:26.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.446 --rc genhtml_branch_coverage=1 00:31:26.446 --rc genhtml_function_coverage=1 00:31:26.446 --rc genhtml_legend=1 00:31:26.446 --rc geninfo_all_blocks=1 00:31:26.446 --rc geninfo_unexecuted_blocks=1 00:31:26.446 00:31:26.446 ' 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:26.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.446 --rc genhtml_branch_coverage=1 00:31:26.446 --rc genhtml_function_coverage=1 00:31:26.446 --rc genhtml_legend=1 
00:31:26.446 --rc geninfo_all_blocks=1 00:31:26.446 --rc geninfo_unexecuted_blocks=1 00:31:26.446 00:31:26.446 ' 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:26.446 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:26.447 19:21:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:31:26.447 19:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:34.584 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:34.584 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:31:34.584 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:34.585 19:21:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:34.585 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:34.585 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:34.585 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:34.585 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:34.585 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:34.586 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:34.586 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:34.586 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:34.586 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:34.586 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:34.586 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:34.586 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:34.586 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:34.586 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:31:34.586 00:31:34.586 --- 10.0.0.2 ping statistics --- 00:31:34.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:34.586 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:31:34.586 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:34.586 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:34.586 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:31:34.586 00:31:34.586 --- 10.0.0.1 ping statistics --- 00:31:34.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:34.586 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:31:34.586 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:34.586 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:31:34.586 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:34.586 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:34.586 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:34.586 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:34.586 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:34.586 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:34.586 19:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:34.586 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:31:34.586 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:31:34.586 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:31:34.586 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:34.586 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:34.586 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:34.586 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3162331 00:31:34.586 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3162331 00:31:34.586 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:31:34.586 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3162331 ']' 00:31:34.586 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:34.586 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:34.586 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
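[Editor's note: condensed from the nvmf_tcp_init trace above: the first E810 port (cvl_0_0) is moved into a private network namespace to serve as the target side, the second (cvl_0_1) stays in the root namespace as the initiator, an iptables rule opens port 4420 and is tagged with an SPDK_NVMF comment so teardown can strip it, and a ping in each direction proves the path before the target starts. A sketch of the equivalent commands, all taken from the trace; the interface names and addresses are this machine's:]

```bash
ip netns add cvl_0_0_ns_spdk                 # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move the target NIC into it
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator address, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port; the comment tags the rule so cleanup can run
#   iptables-save | grep -v SPDK_NVMF | iptables-restore
# to remove exactly the rules this test added (see the teardown trace).
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

ping -c 1 10.0.0.2                                 # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # and back again
```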
00:31:34.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:34.586 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:34.586 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:34.586 [2024-11-26 19:21:51.094136] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:34.586 [2024-11-26 19:21:51.095292] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:31:34.586 [2024-11-26 19:21:51.095342] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:34.586 [2024-11-26 19:21:51.195967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:34.586 [2024-11-26 19:21:51.249192] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:34.586 [2024-11-26 19:21:51.249247] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:34.586 [2024-11-26 19:21:51.249256] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:34.586 [2024-11-26 19:21:51.249267] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:34.586 [2024-11-26 19:21:51.249274] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:34.586 [2024-11-26 19:21:51.251263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:34.586 [2024-11-26 19:21:51.251538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:34.586 [2024-11-26 19:21:51.251698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:34.586 [2024-11-26 19:21:51.251701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:34.586 [2024-11-26 19:21:51.331485] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:34.586 [2024-11-26 19:21:51.332428] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:34.586 [2024-11-26 19:21:51.332729] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:34.586 [2024-11-26 19:21:51.333292] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:34.586 [2024-11-26 19:21:51.333332] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
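[Editor's note: the reactor placement in the notices above follows directly from the `-m 0x1E` mask passed to nvmf_tgt: 0x1E is binary 11110, so bits 1-4 are set and the app starts reactors on cores 1-4 ("Total cores available: 4") while leaving core 0 free. A quick hedged sketch for decoding any SPDK core mask:]

```bash
mask=0x1E
printf '%s ->' "$mask"
# Test each bit of the mask; a set bit means a reactor runs on that core.
for bit in {0..31}; do
    (( (mask >> bit) & 1 )) && printf ' core %d' "$bit"
done
echo    # prints: 0x1E -> core 1 core 2 core 3 core 4
```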
00:31:34.847 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:34.847 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:31:34.847 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:34.847 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:34.847 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:34.847 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:34.847 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:34.847 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.847 19:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:34.847 [2024-11-26 19:21:51.976747] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:34.847 19:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.847 19:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:31:34.847 19:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:34.847 19:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:34.847 19:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:34.847 19:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:31:34.847 19:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:31:34.847 19:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.847 19:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:34.847 Malloc0 00:31:35.110 [2024-11-26 19:21:52.076938] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:35.110 19:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.110 19:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:31:35.110 19:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:35.110 19:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:35.110 19:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3162479 00:31:35.110 19:21:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3162479 /var/tmp/bdevperf.sock 00:31:35.110 19:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3162479 ']' 00:31:35.110 19:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:35.110 19:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:35.110 19:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:35.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:35.110 19:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:35.110 19:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:35.110 19:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:31:35.110 19:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:35.110 19:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:35.110 19:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:35.110 19:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:35.110 19:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:35.110 { 00:31:35.110 "params": { 00:31:35.110 "name": "Nvme$subsystem", 00:31:35.110 "trtype": "$TEST_TRANSPORT", 00:31:35.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:35.110 "adrfam": "ipv4", 00:31:35.110 "trsvcid": "$NVMF_PORT", 00:31:35.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:35.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:35.110 "hdgst": ${hdgst:-false}, 00:31:35.110 "ddgst": ${ddgst:-false} 00:31:35.110 }, 00:31:35.110 "method": "bdev_nvme_attach_controller" 00:31:35.110 } 00:31:35.110 EOF 00:31:35.110 )") 00:31:35.110 19:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:35.110 19:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:31:35.110 19:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:35.110 19:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:35.110 "params": { 00:31:35.110 "name": "Nvme0", 00:31:35.110 "trtype": "tcp", 00:31:35.110 "traddr": "10.0.0.2", 00:31:35.110 "adrfam": "ipv4", 00:31:35.110 "trsvcid": "4420", 00:31:35.110 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:35.110 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:35.110 "hdgst": false, 00:31:35.110 "ddgst": false 00:31:35.110 }, 00:31:35.110 "method": "bdev_nvme_attach_controller" 00:31:35.110 }' 00:31:35.110 [2024-11-26 19:21:52.187841] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:31:35.110 [2024-11-26 19:21:52.187910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3162479 ] 00:31:35.110 [2024-11-26 19:21:52.280994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:35.371 [2024-11-26 19:21:52.334390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:35.632 Running I/O for 10 seconds... 00:31:35.894 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:35.894 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:31:35.894 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:35.894 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.894 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:35.894 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.894 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:35.894 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:31:35.894 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:35.894 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:31:35.894 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:31:35.894 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:31:35.894 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:31:35.894 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:31:35.894 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:31:35.894 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:31:35.894 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.894 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:35.895 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.895 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:31:35.895 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:31:35.895 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:31:35.895 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:31:35.895 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:31:35.895 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:35.895 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.895 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:36.159 [2024-11-26 19:21:53.105740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17bef20 is same with the state(6) to be set 00:31:36.159 [... identical tcp.c:1773 "recv state of tqpair=0x17bef20" message repeated ~60 more times between 19:21:53.105800 and 19:21:53.106279; duplicates elided ...] 00:31:36.159 [2024-11-26 19:21:53.106468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.159 [2024-11-26 19:21:53.106525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.160 [... the same READ / "ABORTED - SQ DELETION (00/08)" pair repeated for cid:1 through cid:63, lba stepping by 128 from 73856 to 81792, len:128 each (cid:58 and cid:59 logged out of order); duplicates elided ...] 00:31:36.161 [2024-11-26 19:21:53.107682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4ee0 is same with the state(6) to be set 00:31:36.161 [2024-11-26 19:21:53.107809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:36.161 [2024-11-26 19:21:53.107821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.161 [2024-11-26 19:21:53.107829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:36.161 [2024-11-26 19:21:53.107837] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.161 [2024-11-26 19:21:53.107846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:36.161 [2024-11-26 19:21:53.107854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.161 [2024-11-26 19:21:53.107862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:36.161 [2024-11-26 19:21:53.107870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.161 [2024-11-26 19:21:53.107877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecc010 is same with the state(6) to be set 00:31:36.161 [2024-11-26 19:21:53.109126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:36.161 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.161 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:36.161 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.161 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:36.161 task offset: 73728 on job bdev=Nvme0n1 fails 00:31:36.161 00:31:36.161 Latency(us) 00:31:36.161 [2024-11-26T18:21:53.374Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:36.161 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:36.161 Job: Nvme0n1 ended in about 0.44 seconds with error 00:31:36.161 Verification LBA range: start 0x0 length 0x400 00:31:36.161 Nvme0n1 : 0.44 1309.89 81.87 145.54 0.00 42688.06 4724.05 38666.24 00:31:36.161 [2024-11-26T18:21:53.374Z] =================================================================================================================== 00:31:36.161 [2024-11-26T18:21:53.374Z] Total : 1309.89 81.87 145.54 0.00 42688.06 4724.05 38666.24 00:31:36.161 [2024-11-26 19:21:53.111354] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:36.161 [2024-11-26 19:21:53.111394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ecc010 (9): Bad file descriptor 00:31:36.161 [2024-11-26 19:21:53.113017] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:31:36.161 [2024-11-26 19:21:53.113122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:36.161 [2024-11-26 19:21:53.113150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.161 [2024-11-26 19:21:53.113180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:31:36.161 [2024-11-26 19:21:53.113190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: 
Connect command completed with error: sct 1, sc 132 00:31:36.161 [2024-11-26 19:21:53.113198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.161 [2024-11-26 19:21:53.113206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1ecc010 00:31:36.161 [2024-11-26 19:21:53.113228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ecc010 (9): Bad file descriptor 00:31:36.161 [2024-11-26 19:21:53.113242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:36.161 [2024-11-26 19:21:53.113250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:36.161 [2024-11-26 19:21:53.113261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:36.161 [2024-11-26 19:21:53.113271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:36.161 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.161 19:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:31:37.105 19:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3162479 00:31:37.105 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3162479) - No such process 00:31:37.105 19:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:31:37.105 19:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:31:37.105 19:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:31:37.105 19:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:31:37.105 19:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:37.105 19:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:37.105 19:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:37.105 19:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:37.105 { 00:31:37.105 "params": { 00:31:37.105 "name": "Nvme$subsystem", 00:31:37.105 "trtype": "$TEST_TRANSPORT", 00:31:37.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:37.105 "adrfam": "ipv4", 00:31:37.105 "trsvcid": "$NVMF_PORT", 00:31:37.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:37.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:37.105 "hdgst": ${hdgst:-false}, 00:31:37.105 "ddgst": ${ddgst:-false} 00:31:37.105 }, 00:31:37.105 "method": "bdev_nvme_attach_controller" 00:31:37.105 } 00:31:37.105 EOF 00:31:37.105 )") 00:31:37.105 19:21:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:37.105 19:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:31:37.105 19:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:37.105 19:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:37.105 "params": { 00:31:37.105 "name": "Nvme0", 00:31:37.105 "trtype": "tcp", 00:31:37.105 "traddr": "10.0.0.2", 00:31:37.105 "adrfam": "ipv4", 00:31:37.105 "trsvcid": "4420", 00:31:37.105 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:37.105 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:37.105 "hdgst": false, 00:31:37.105 "ddgst": false 00:31:37.105 }, 00:31:37.105 "method": "bdev_nvme_attach_controller" 00:31:37.105 }' 00:31:37.105 [2024-11-26 19:21:54.184802] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:31:37.105 [2024-11-26 19:21:54.184870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3162933 ] 00:31:37.105 [2024-11-26 19:21:54.278101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.365 [2024-11-26 19:21:54.332337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:37.624 Running I/O for 1 seconds... 00:31:38.563 1664.00 IOPS, 104.00 MiB/s 00:31:38.563 Latency(us) 00:31:38.563 [2024-11-26T18:21:55.776Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:38.563 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:38.563 Verification LBA range: start 0x0 length 0x400 00:31:38.563 Nvme0n1 : 1.03 1683.15 105.20 0.00 0.00 37298.16 7864.32 31894.19 00:31:38.563 [2024-11-26T18:21:55.776Z] =================================================================================================================== 00:31:38.563 [2024-11-26T18:21:55.776Z] Total : 1683.15 105.20 0.00 0.00 37298.16 7864.32 31894.19 00:31:38.823 19:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:31:38.823 19:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:31:38.823 19:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:38.823 19:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:38.823 19:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:31:38.823 19:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:38.823 19:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:31:38.823 19:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:38.823 19:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:31:38.823 
19:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:38.823 19:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:38.823 rmmod nvme_tcp 00:31:38.823 rmmod nvme_fabrics 00:31:38.823 rmmod nvme_keyring 00:31:38.823 19:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:38.823 19:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:31:38.823 19:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:31:38.823 19:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3162331 ']' 00:31:38.824 19:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3162331 00:31:38.824 19:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3162331 ']' 00:31:38.824 19:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3162331 00:31:38.824 19:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:31:38.824 19:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:38.824 19:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3162331 00:31:38.824 19:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:38.824 19:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:38.824 19:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3162331' 00:31:38.824 killing process with pid 3162331 00:31:38.824 19:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3162331 00:31:38.824 19:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3162331 00:31:39.085 [2024-11-26 19:21:56.055134] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:31:39.085 19:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:39.085 19:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:39.085 19:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:39.085 19:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:31:39.085 19:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:31:39.085 19:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:39.085 19:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:31:39.085 19:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:39.085 19:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:39.085 19:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:39.085 19:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:39.085 19:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.997 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:40.997 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:31:40.997 00:31:40.997 real 0m14.931s 00:31:40.997 user 0m20.133s 00:31:40.997 sys 0m7.562s 00:31:40.997 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:40.997 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:40.997 ************************************ 00:31:40.997 END TEST nvmf_host_management 00:31:40.997 ************************************ 00:31:40.997 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:40.997 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:40.997 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:40.997 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:41.258 ************************************ 00:31:41.258 START TEST nvmf_lvol 00:31:41.258 ************************************ 00:31:41.258 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:41.258 * Looking for test storage... 
00:31:41.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:41.258 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:41.258 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:31:41.258 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:41.258 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:41.258 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:41.258 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:41.258 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:41.258 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:31:41.258 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:31:41.258 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:31:41.258 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:31:41.258 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:31:41.258 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:31:41.258 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:31:41.258 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:41.258 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:31:41.258 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:31:41.258 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:41.258 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:41.258 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:31:41.258 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:31:41.258 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:41.258 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:31:41.258 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:31:41.258 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:31:41.258 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:31:41.258 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:41.258 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:31:41.258 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:31:41.258 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:41.258 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:41.258 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:31:41.258 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:41.258 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:41.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.258 --rc genhtml_branch_coverage=1 00:31:41.258 --rc genhtml_function_coverage=1 00:31:41.258 --rc genhtml_legend=1 00:31:41.258 --rc geninfo_all_blocks=1 00:31:41.258 --rc geninfo_unexecuted_blocks=1 00:31:41.258 00:31:41.258 ' 00:31:41.258 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:41.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.258 --rc genhtml_branch_coverage=1 00:31:41.258 --rc genhtml_function_coverage=1 00:31:41.258 --rc genhtml_legend=1 00:31:41.258 --rc geninfo_all_blocks=1 00:31:41.258 --rc geninfo_unexecuted_blocks=1 00:31:41.258 00:31:41.258 ' 00:31:41.258 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:41.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.258 --rc genhtml_branch_coverage=1 00:31:41.258 --rc genhtml_function_coverage=1 00:31:41.258 --rc genhtml_legend=1 00:31:41.259 --rc geninfo_all_blocks=1 00:31:41.259 --rc geninfo_unexecuted_blocks=1 00:31:41.259 00:31:41.259 ' 00:31:41.259 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:41.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.259 --rc genhtml_branch_coverage=1 00:31:41.259 --rc genhtml_function_coverage=1 00:31:41.259 --rc genhtml_legend=1 00:31:41.259 --rc geninfo_all_blocks=1 00:31:41.259 --rc geninfo_unexecuted_blocks=1 00:31:41.259 00:31:41.259 ' 00:31:41.259 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:41.259 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:31:41.259 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:41.259 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:41.259 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:41.259 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:41.259 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:41.259 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:41.259 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:41.259 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:41.259 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:41.259 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:41.259 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:41.259 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:41.259 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:41.259 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:41.259 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:41.259 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:41.259 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:41.259 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:31:41.519 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:41.519 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:41.519 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:41.519 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.520 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.520 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.520 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:31:41.520 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.520 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:31:41.520 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:41.520 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:41.520 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:41.520 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:41.520 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:41.520 19:21:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:41.520 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:41.520 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:41.520 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:41.520 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:41.520 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:41.520 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:41.520 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:31:41.520 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:31:41.520 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:41.520 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:31:41.520 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:41.520 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:41.520 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:41.520 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:41.520 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:41.520 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:41.520 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:41.520 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:41.520 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:41.520 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:41.520 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:31:41.520 19:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:49.661 19:22:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:49.661 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:49.661 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:49.661 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:49.661 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:49.661 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:49.662 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:49.662 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:49.662 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:49.662 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:49.662 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:49.662 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:49.662 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:49.662 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:49.662 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:49.662 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:49.662 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:49.662 
19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:49.662 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:49.662 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:49.662 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:49.662 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:49.662 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:49.662 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:49.662 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:49.662 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:49.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:49.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:31:49.662 00:31:49.662 --- 10.0.0.2 ping statistics --- 00:31:49.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:49.662 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:31:49.662 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:49.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:49.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:31:49.662 00:31:49.662 --- 10.0.0.1 ping statistics --- 00:31:49.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:49.662 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:31:49.662 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:49.662 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:31:49.662 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:49.662 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:49.662 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:49.662 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:49.662 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:49.662 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:49.662 19:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:49.662 19:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:31:49.662 19:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:49.662 19:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:49.662 19:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:49.662 19:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3167628 00:31:49.662 19:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3167628 00:31:49.662 19:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:31:49.662 19:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3167628 ']' 00:31:49.662 19:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:49.662 19:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:49.662 19:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:49.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:49.662 19:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:49.662 19:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:49.662 [2024-11-26 19:22:06.067750] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
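Condensed, the namespace plumbing traced above (nvmf/common.sh@271 through @291) comes down to the following sequence; interface names and addresses are as printed in this log, and the authoritative logic lives in the test suite's nvmf/common.sh:

ip netns add cvl_0_0_ns_spdk                         # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP from the initiator
ping -c 1 10.0.0.2                                   # target reachable from the default namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and the initiator from inside the namespace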
00:31:49.662 [2024-11-26 19:22:06.068906] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:31:49.662 [2024-11-26 19:22:06.068956] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:49.662 [2024-11-26 19:22:06.167645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:49.662 [2024-11-26 19:22:06.219396] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:49.662 [2024-11-26 19:22:06.219452] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:49.662 [2024-11-26 19:22:06.219461] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:49.662 [2024-11-26 19:22:06.219468] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:49.662 [2024-11-26 19:22:06.219474] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:49.662 [2024-11-26 19:22:06.221229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:49.662 [2024-11-26 19:22:06.221318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:49.662 [2024-11-26 19:22:06.221319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:49.662 [2024-11-26 19:22:06.299577] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:49.662 [2024-11-26 19:22:06.300521] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:49.662 [2024-11-26 19:22:06.301021] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:49.662 [2024-11-26 19:22:06.301145] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:31:49.923 19:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:49.923 19:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:31:49.923 19:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:49.923 19:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:49.923 19:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:49.923 19:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:49.923 19:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:49.923 [2024-11-26 19:22:07.110631] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:50.184 19:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:50.184 19:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:31:50.184 19:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:50.444 19:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:31:50.444 19:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:31:50.704 19:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:31:50.964 19:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=6d3897cd-6145-46f5-b4bf-88758c183495 00:31:50.964 19:22:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6d3897cd-6145-46f5-b4bf-88758c183495 lvol 20 00:31:50.964 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=6956a542-dceb-4b0c-86d1-b9b41a30f189 00:31:50.964 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:51.224 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6956a542-dceb-4b0c-86d1-b9b41a30f189 00:31:51.483 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:51.742 [2024-11-26 19:22:08.694612] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:31:51.742 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:51.742 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3168214 00:31:51.742 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:31:51.742 19:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:31:52.681 19:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 6956a542-dceb-4b0c-86d1-b9b41a30f189 MY_SNAPSHOT 00:31:52.941 19:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=02f67474-70a9-4561-837f-0cbc404cdee8 00:31:52.941 19:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 6956a542-dceb-4b0c-86d1-b9b41a30f189 30 00:31:53.202 19:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 02f67474-70a9-4561-837f-0cbc404cdee8 MY_CLONE 00:31:53.463 19:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=37c23acb-2e09-4bda-8c84-a96f1cfb64ba 00:31:53.463 19:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 37c23acb-2e09-4bda-8c84-a96f1cfb64ba 00:31:54.035 19:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3168214 00:32:02.172 Initializing NVMe Controllers 00:32:02.172 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:32:02.172 Controller IO queue size 128, less than required. 00:32:02.172 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:02.172 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:32:02.172 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:32:02.172 Initialization complete. Launching workers. 
00:32:02.172 ======================================================== 00:32:02.173 Latency(us) 00:32:02.173 Device Information : IOPS MiB/s Average min max 00:32:02.173 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15585.10 60.88 8215.41 1468.38 82621.40 00:32:02.173 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15277.10 59.68 8380.33 4248.14 92739.64 00:32:02.173 ======================================================== 00:32:02.173 Total : 30862.20 120.56 8297.04 1468.38 92739.64 00:32:02.173 00:32:02.173 19:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:02.433 19:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6956a542-dceb-4b0c-86d1-b9b41a30f189 00:32:02.433 19:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6d3897cd-6145-46f5-b4bf-88758c183495 00:32:02.693 19:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:32:02.693 19:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:32:02.693 19:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:32:02.693 19:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:02.693 19:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:32:02.693 19:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:02.693 19:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:32:02.693 19:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:02.693 19:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:02.693 rmmod nvme_tcp 00:32:02.693 rmmod nvme_fabrics 00:32:02.693 rmmod nvme_keyring 00:32:02.693 19:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:02.693 19:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:32:02.693 19:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:32:02.693 19:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3167628 ']' 00:32:02.693 19:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3167628 00:32:02.693 19:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3167628 ']' 00:32:02.693 19:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3167628 00:32:02.693 19:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:32:02.693 19:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:02.693 19:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3167628 00:32:02.954 19:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:02.954 19:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:02.954 19:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3167628' 00:32:02.954 killing process with pid 3167628 00:32:02.954 19:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3167628 00:32:02.954 19:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3167628 00:32:02.954 19:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:02.954 19:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:02.954 19:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:02.954 19:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:32:02.954 19:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:32:02.954 19:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:02.954 19:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:32:02.954 19:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:02.954 19:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:02.954 19:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:02.954 19:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:02.954 19:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:05.499 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:05.499 00:32:05.499 real 0m23.894s 00:32:05.499 user 0m55.817s 00:32:05.499 sys 0m10.854s 00:32:05.499 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:05.499 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:05.499 ************************************ 00:32:05.499 END TEST nvmf_lvol 00:32:05.499 ************************************ 00:32:05.499 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:05.499 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:05.499 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:05.499 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:05.499 ************************************ 00:32:05.499 START TEST nvmf_lvs_grow 00:32:05.499 
************************************ 00:32:05.499 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:05.499 * Looking for test storage... 00:32:05.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:05.499 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:05.499 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:32:05.499 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:05.499 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:05.499 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:05.499 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:05.499 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:05.499 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:32:05.499 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:32:05.499 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:32:05.499 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:32:05.499 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:32:05.499 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:32:05.499 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:05.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.500 --rc genhtml_branch_coverage=1 00:32:05.500 --rc genhtml_function_coverage=1 00:32:05.500 --rc genhtml_legend=1 00:32:05.500 --rc geninfo_all_blocks=1 00:32:05.500 --rc geninfo_unexecuted_blocks=1 00:32:05.500 00:32:05.500 ' 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:05.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.500 --rc genhtml_branch_coverage=1 00:32:05.500 --rc genhtml_function_coverage=1 00:32:05.500 --rc genhtml_legend=1 00:32:05.500 --rc geninfo_all_blocks=1 00:32:05.500 --rc geninfo_unexecuted_blocks=1 00:32:05.500 00:32:05.500 ' 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:05.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.500 --rc genhtml_branch_coverage=1 00:32:05.500 --rc genhtml_function_coverage=1 00:32:05.500 --rc genhtml_legend=1 00:32:05.500 --rc geninfo_all_blocks=1 00:32:05.500 --rc geninfo_unexecuted_blocks=1 00:32:05.500 00:32:05.500 ' 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:05.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.500 --rc genhtml_branch_coverage=1 00:32:05.500 --rc genhtml_function_coverage=1 00:32:05.500 --rc genhtml_legend=1 00:32:05.500 --rc geninfo_all_blocks=1 00:32:05.500 --rc geninfo_unexecuted_blocks=1 00:32:05.500 00:32:05.500 ' 00:32:05.500 19:22:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
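A note on the PATH walls above (traced right after nvmf/common.sh pinned the test ports 4420-4422 and derived the host NQN from nvme gen-hostnqn): paths/export.sh prepends the Go, protoc and golangci tool directories every time it is sourced, and nothing ever deduplicates the list, so each re-source grows PATH by another copy of the same entries. Harmless, since lookup stops at the first hit. If dedup were ever wanted, a sketch like this would do it (not part of the harness; the function name and layout are mine):

  dedupe_path() {
      local IFS=: p seen= out=
      for p in $PATH; do
          case ":$seen:" in *":$p:"*) continue ;; esac   # already kept, skip
          seen="$seen:$p"
          out="${out:+$out:}$p"
      done
      printf '%s\n' "$out"
  }
  PATH=$(dedupe_path)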
00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:05.500 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:05.501 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:05.501 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:05.501 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:32:05.501 19:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:13.760 19:22:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
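The arrays being filled above are device-ID allowlists: e810 and x722 collect the supported Intel (vendor 0x8086) NICs, mlx the Mellanox (0x15b3) ones, each entry pulled from a pci_bus_cache map keyed "vendor:device" and holding the matching PCI addresses. One way such a cache can be populated (a sketch built from lspci machine-readable output; the real common.sh construction may differ):

  declare -A pci_bus_cache
  while read -r bdf _class vendor device _; do
      vendor=${vendor//\"/} device=${device//\"/}        # lspci -nmm quotes the IDs
      pci_bus_cache["0x$vendor:0x$device"]+="$bdf "
  done < <(lspci -Dnmm)
  # this run: pci_bus_cache[0x8086:0x159b] -> "0000:4b:00.0 0000:4b:00.1 "

Since this box carries two 0x159b (E810 25GbE) functions, pci_devs ends up with two entries and the (( 2 == 0 )) guard above falls through to the per-device loop that follows.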
00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:13.760 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:13.760 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:13.760 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:13.760 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:13.760 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:13.761 19:22:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:13.761 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:13.761 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:32:13.761 00:32:13.761 --- 10.0.0.2 ping statistics --- 00:32:13.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:13.761 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:13.761 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:13.761 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:32:13.761 00:32:13.761 --- 10.0.0.1 ping statistics --- 00:32:13.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:13.761 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3174414 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3174414 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3174414 ']' 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:13.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:13.761 19:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:13.761 [2024-11-26 19:22:30.042288] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
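With both one-packet pings green, the target comes up: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace, pinned to core 0 (-m 0x1) and, for this suite, with --interrupt-mode, then blocks until the RPC socket answers. A simplified stand-in for that launch-and-wait (the real waitforlisten also enforces the max_retries=100 budget visible above):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done

The "Set SPDK running in interrupt mode" notice just printed is the payoff of that flag: reactors sleep on file descriptors instead of busy-polling, which is the whole point of the interrupt_mode suite.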
00:32:13.761 [2024-11-26 19:22:30.043446] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:32:13.761 [2024-11-26 19:22:30.043498] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:13.761 [2024-11-26 19:22:30.146015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.761 [2024-11-26 19:22:30.200105] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:13.761 [2024-11-26 19:22:30.200168] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:13.761 [2024-11-26 19:22:30.200176] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:13.761 [2024-11-26 19:22:30.200184] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:13.761 [2024-11-26 19:22:30.200190] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:13.761 [2024-11-26 19:22:30.200963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:13.761 [2024-11-26 19:22:30.278610] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:13.761 [2024-11-26 19:22:30.278897] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:13.761 19:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:13.761 19:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:32:13.761 19:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:13.761 19:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:13.761 19:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:13.761 19:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:13.761 19:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:14.036 [2024-11-26 19:22:31.065849] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:14.036 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:32:14.036 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:14.036 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:14.036 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:14.036 ************************************ 00:32:14.036 START TEST lvs_grow_clean 00:32:14.036 ************************************ 00:32:14.036 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:32:14.036 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:14.036 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:14.036 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:14.036 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:14.036 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:14.036 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:14.036 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:14.036 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:14.036 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:14.297 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:14.297 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:14.559 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=af34fb36-3203-4c10-8f42-a75bdf7360d5 00:32:14.559 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af34fb36-3203-4c10-8f42-a75bdf7360d5 00:32:14.559 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:14.559 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:14.559 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:14.559 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u af34fb36-3203-4c10-8f42-a75bdf7360d5 lvol 150 00:32:14.821 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=cb001c7d-c78e-42aa-9ba3-c107f4e5daaf 00:32:14.821 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:14.821 19:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:15.083 [2024-11-26 19:22:32.053526] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:15.083 [2024-11-26 19:22:32.053695] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:15.083 true 00:32:15.083 19:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af34fb36-3203-4c10-8f42-a75bdf7360d5 00:32:15.083 19:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:15.083 19:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:15.083 19:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:15.343 19:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cb001c7d-c78e-42aa-9ba3-c107f4e5daaf 00:32:15.604 19:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:15.604 [2024-11-26 19:22:32.778216] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:15.604 19:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:15.865 19:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:15.865 19:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3175181 00:32:15.865 19:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:15.865 19:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3175181 /var/tmp/bdevperf.sock 00:32:15.865 19:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3175181 ']' 00:32:15.865 19:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:32:15.865 19:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:15.865 19:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:15.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:15.865 19:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:15.865 19:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:15.865 [2024-11-26 19:22:33.016450] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:32:15.865 [2024-11-26 19:22:33.016522] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3175181 ] 00:32:16.125 [2024-11-26 19:22:33.107958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:16.125 [2024-11-26 19:22:33.159774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:16.697 19:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:16.697 19:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:32:16.697 19:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:16.959 Nvme0n1 00:32:16.959 19:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:17.220 [ 00:32:17.220 { 00:32:17.220 "name": "Nvme0n1", 00:32:17.220 "aliases": [ 00:32:17.220 "cb001c7d-c78e-42aa-9ba3-c107f4e5daaf" 00:32:17.220 ], 00:32:17.220 "product_name": "NVMe disk", 00:32:17.220 "block_size": 4096, 00:32:17.220 "num_blocks": 38912, 00:32:17.220 "uuid": "cb001c7d-c78e-42aa-9ba3-c107f4e5daaf", 00:32:17.220 "numa_id": 0, 00:32:17.220 "assigned_rate_limits": { 00:32:17.220 "rw_ios_per_sec": 0, 00:32:17.220 "rw_mbytes_per_sec": 0, 00:32:17.220 "r_mbytes_per_sec": 0, 00:32:17.220 "w_mbytes_per_sec": 0 00:32:17.220 }, 00:32:17.220 "claimed": false, 00:32:17.220 "zoned": false, 00:32:17.220 "supported_io_types": { 00:32:17.220 "read": true, 00:32:17.220 "write": true, 00:32:17.220 "unmap": true, 00:32:17.220 "flush": true, 00:32:17.220 "reset": true, 00:32:17.220 "nvme_admin": true, 00:32:17.220 "nvme_io": true, 00:32:17.220 "nvme_io_md": false, 00:32:17.220 "write_zeroes": true, 00:32:17.220 "zcopy": false, 00:32:17.220 "get_zone_info": false, 00:32:17.220 "zone_management": false, 00:32:17.220 "zone_append": false, 00:32:17.220 "compare": true, 00:32:17.220 "compare_and_write": true, 00:32:17.220 "abort": true, 00:32:17.220 "seek_hole": false, 00:32:17.220 "seek_data": false, 00:32:17.220 "copy": true, 
00:32:17.220 "nvme_iov_md": false 00:32:17.220 }, 00:32:17.220 "memory_domains": [ 00:32:17.220 { 00:32:17.220 "dma_device_id": "system", 00:32:17.220 "dma_device_type": 1 00:32:17.220 } 00:32:17.220 ], 00:32:17.220 "driver_specific": { 00:32:17.220 "nvme": [ 00:32:17.220 { 00:32:17.220 "trid": { 00:32:17.220 "trtype": "TCP", 00:32:17.220 "adrfam": "IPv4", 00:32:17.220 "traddr": "10.0.0.2", 00:32:17.220 "trsvcid": "4420", 00:32:17.220 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:17.220 }, 00:32:17.220 "ctrlr_data": { 00:32:17.220 "cntlid": 1, 00:32:17.220 "vendor_id": "0x8086", 00:32:17.220 "model_number": "SPDK bdev Controller", 00:32:17.220 "serial_number": "SPDK0", 00:32:17.220 "firmware_revision": "25.01", 00:32:17.220 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:17.220 "oacs": { 00:32:17.221 "security": 0, 00:32:17.221 "format": 0, 00:32:17.221 "firmware": 0, 00:32:17.221 "ns_manage": 0 00:32:17.221 }, 00:32:17.221 "multi_ctrlr": true, 00:32:17.221 "ana_reporting": false 00:32:17.221 }, 00:32:17.221 "vs": { 00:32:17.221 "nvme_version": "1.3" 00:32:17.221 }, 00:32:17.221 "ns_data": { 00:32:17.221 "id": 1, 00:32:17.221 "can_share": true 00:32:17.221 } 00:32:17.221 } 00:32:17.221 ], 00:32:17.221 "mp_policy": "active_passive" 00:32:17.221 } 00:32:17.221 } 00:32:17.221 ] 00:32:17.221 19:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3175410 00:32:17.221 19:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:17.221 19:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:17.221 Running I/O for 10 seconds... 
00:32:18.608 Latency(us) 00:32:18.608 [2024-11-26T18:22:35.821Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:18.608 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:18.608 Nvme0n1 : 1.00 16647.00 65.03 0.00 0.00 0.00 0.00 0.00 00:32:18.608 [2024-11-26T18:22:35.821Z] =================================================================================================================== 00:32:18.608 [2024-11-26T18:22:35.821Z] Total : 16647.00 65.03 0.00 0.00 0.00 0.00 0.00 00:32:18.608 00:32:19.180 19:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u af34fb36-3203-4c10-8f42-a75bdf7360d5 00:32:19.441 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:19.441 Nvme0n1 : 2.00 16832.50 65.75 0.00 0.00 0.00 0.00 0.00 00:32:19.441 [2024-11-26T18:22:36.654Z] =================================================================================================================== 00:32:19.441 [2024-11-26T18:22:36.654Z] Total : 16832.50 65.75 0.00 0.00 0.00 0.00 0.00 00:32:19.441 00:32:19.441 true 00:32:19.441 19:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af34fb36-3203-4c10-8f42-a75bdf7360d5 00:32:19.441 19:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:19.701 19:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:19.702 19:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:19.702 19:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3175410 00:32:20.274 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:20.274 Nvme0n1 : 3.00 17021.33 66.49 0.00 0.00 0.00 0.00 0.00 00:32:20.274 [2024-11-26T18:22:37.487Z] =================================================================================================================== 00:32:20.274 [2024-11-26T18:22:37.487Z] Total : 17021.33 66.49 0.00 0.00 0.00 0.00 0.00 00:32:20.274 00:32:21.215 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:21.215 Nvme0n1 : 4.00 17525.50 68.46 0.00 0.00 0.00 0.00 0.00 00:32:21.215 [2024-11-26T18:22:38.428Z] =================================================================================================================== 00:32:21.215 [2024-11-26T18:22:38.428Z] Total : 17525.50 68.46 0.00 0.00 0.00 0.00 0.00 00:32:21.215 00:32:22.597 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:22.597 Nvme0n1 : 5.00 19052.00 74.42 0.00 0.00 0.00 0.00 0.00 00:32:22.597 [2024-11-26T18:22:39.810Z] =================================================================================================================== 00:32:22.597 [2024-11-26T18:22:39.810Z] Total : 19052.00 74.42 0.00 0.00 0.00 0.00 0.00 00:32:22.597 00:32:23.537 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:23.537 Nvme0n1 : 6.00 20067.67 78.39 0.00 0.00 0.00 0.00 0.00 00:32:23.537 [2024-11-26T18:22:40.750Z] 
=================================================================================================================== 00:32:23.537 [2024-11-26T18:22:40.750Z] Total : 20067.67 78.39 0.00 0.00 0.00 0.00 0.00 00:32:23.537 00:32:24.478 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:24.478 Nvme0n1 : 7.00 20811.29 81.29 0.00 0.00 0.00 0.00 0.00 00:32:24.478 [2024-11-26T18:22:41.691Z] =================================================================================================================== 00:32:24.478 [2024-11-26T18:22:41.691Z] Total : 20811.29 81.29 0.00 0.00 0.00 0.00 0.00 00:32:24.478 00:32:25.418 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:25.418 Nvme0n1 : 8.00 21361.12 83.44 0.00 0.00 0.00 0.00 0.00 00:32:25.418 [2024-11-26T18:22:42.631Z] =================================================================================================================== 00:32:25.418 [2024-11-26T18:22:42.631Z] Total : 21361.12 83.44 0.00 0.00 0.00 0.00 0.00 00:32:25.418 00:32:26.358 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:26.358 Nvme0n1 : 9.00 21794.11 85.13 0.00 0.00 0.00 0.00 0.00 00:32:26.358 [2024-11-26T18:22:43.571Z] =================================================================================================================== 00:32:26.358 [2024-11-26T18:22:43.571Z] Total : 21794.11 85.13 0.00 0.00 0.00 0.00 0.00 00:32:26.358 00:32:27.299 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:27.299 Nvme0n1 : 10.00 22135.90 86.47 0.00 0.00 0.00 0.00 0.00 00:32:27.299 [2024-11-26T18:22:44.512Z] =================================================================================================================== 00:32:27.299 [2024-11-26T18:22:44.512Z] Total : 22135.90 86.47 0.00 0.00 0.00 0.00 0.00 00:32:27.299 00:32:27.299 00:32:27.299 Latency(us) 00:32:27.299 [2024-11-26T18:22:44.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:27.299 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:27.299 Nvme0n1 : 10.00 22142.24 86.49 0.00 0.00 5777.69 2894.51 29928.11 00:32:27.299 [2024-11-26T18:22:44.512Z] =================================================================================================================== 00:32:27.299 [2024-11-26T18:22:44.512Z] Total : 22142.24 86.49 0.00 0.00 5777.69 2894.51 29928.11 00:32:27.299 { 00:32:27.299 "results": [ 00:32:27.299 { 00:32:27.299 "job": "Nvme0n1", 00:32:27.299 "core_mask": "0x2", 00:32:27.299 "workload": "randwrite", 00:32:27.299 "status": "finished", 00:32:27.299 "queue_depth": 128, 00:32:27.299 "io_size": 4096, 00:32:27.299 "runtime": 10.002918, 00:32:27.299 "iops": 22142.23889469053, 00:32:27.299 "mibps": 86.49312068238488, 00:32:27.299 "io_failed": 0, 00:32:27.299 "io_timeout": 0, 00:32:27.299 "avg_latency_us": 5777.686689211255, 00:32:27.299 "min_latency_us": 2894.5066666666667, 00:32:27.299 "max_latency_us": 29928.106666666667 00:32:27.299 } 00:32:27.299 ], 00:32:27.299 "core_count": 1 00:32:27.299 } 00:32:27.299 19:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3175181 00:32:27.299 19:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3175181 ']' 00:32:27.299 19:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3175181 
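The point of the sleep-2/grow/wait choreography above: bdev_lvol_grow_lvstore runs while bdevperf is mid-write, stretching the store across the rescanned 400 MiB bdev (100 clusters at 4 MiB, 99 of them data, matching the data_clusters == 99 check), and the per-second tables show I/O never stalling, just climbing from ~16.6k to ~22.1k IOPS as the run proceeds. The grow itself, plus a quick way to read the final JSON summary (the capture filename is illustrative):

  rpc.py bdev_lvol_grow_lvstore -u "$lvs"
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99
  jq -r '.results[] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us"' bdevperf.json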
00:32:27.299 19:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:32:27.299 19:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:27.299 19:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3175181 00:32:27.669 19:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:27.669 19:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:27.670 19:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3175181' 00:32:27.670 killing process with pid 3175181 00:32:27.670 19:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3175181 00:32:27.670 Received shutdown signal, test time was about 10.000000 seconds 00:32:27.670 00:32:27.670 Latency(us) 00:32:27.670 [2024-11-26T18:22:44.883Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:27.670 [2024-11-26T18:22:44.883Z] =================================================================================================================== 00:32:27.670 [2024-11-26T18:22:44.883Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:27.670 19:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3175181 00:32:27.670 19:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:27.670 19:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:27.971 19:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af34fb36-3203-4c10-8f42-a75bdf7360d5 00:32:27.971 19:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:28.238 19:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:28.238 19:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:32:28.238 19:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:28.238 [2024-11-26 19:22:45.365606] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:28.238 19:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af34fb36-3203-4c10-8f42-a75bdf7360d5 
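Teardown doubles as a negative test. After bdevperf is reaped and the subsystem removed, free_clusters == 61 is exactly the 99 total minus the 38 the lvol still holds. Then bdev_aio_delete hot-removes the base bdev, which closes the lvstore underneath the lvol (the vbdev_lvs_hotremove_cb notice above), so the next bdev_lvol_get_lvstores must fail, and NOT asserts that it does. The wrapper is essentially inverted exit status (simplified; the harness version, stepped through just below, also distinguishes signal exits via es > 128):

  NOT() {                      # succeed only when the wrapped command fails
      if "$@"; then
          return 1
      fi
      return 0
  }
  rpc.py bdev_aio_delete aio_bdev
  NOT rpc.py bdev_lvol_get_lvstores -u "$lvs"   # expects the -19 "No such device" that follows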
00:32:28.238 19:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:32:28.238 19:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af34fb36-3203-4c10-8f42-a75bdf7360d5 00:32:28.238 19:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:28.238 19:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:28.238 19:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:28.238 19:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:28.238 19:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:28.238 19:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:28.238 19:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:28.238 19:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:28.238 19:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af34fb36-3203-4c10-8f42-a75bdf7360d5 00:32:28.498 request: 00:32:28.498 { 00:32:28.498 "uuid": "af34fb36-3203-4c10-8f42-a75bdf7360d5", 00:32:28.498 "method": "bdev_lvol_get_lvstores", 00:32:28.498 "req_id": 1 00:32:28.498 } 00:32:28.498 Got JSON-RPC error response 00:32:28.498 response: 00:32:28.498 { 00:32:28.498 "code": -19, 00:32:28.498 "message": "No such device" 00:32:28.498 } 00:32:28.498 19:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:32:28.498 19:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:28.498 19:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:28.499 19:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:28.499 19:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:28.759 aio_bdev 00:32:28.759 19:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
cb001c7d-c78e-42aa-9ba3-c107f4e5daaf 00:32:28.759 19:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=cb001c7d-c78e-42aa-9ba3-c107f4e5daaf 00:32:28.759 19:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:28.759 19:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:32:28.759 19:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:28.759 19:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:28.759 19:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:28.759 19:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cb001c7d-c78e-42aa-9ba3-c107f4e5daaf -t 2000 00:32:29.019 [ 00:32:29.019 { 00:32:29.019 "name": "cb001c7d-c78e-42aa-9ba3-c107f4e5daaf", 00:32:29.019 "aliases": [ 00:32:29.019 "lvs/lvol" 00:32:29.019 ], 00:32:29.019 "product_name": "Logical Volume", 00:32:29.019 "block_size": 4096, 00:32:29.019 "num_blocks": 38912, 00:32:29.019 "uuid": "cb001c7d-c78e-42aa-9ba3-c107f4e5daaf", 00:32:29.019 "assigned_rate_limits": { 00:32:29.019 "rw_ios_per_sec": 0, 00:32:29.019 "rw_mbytes_per_sec": 0, 00:32:29.019 "r_mbytes_per_sec": 0, 00:32:29.019 "w_mbytes_per_sec": 0 00:32:29.019 }, 00:32:29.019 "claimed": false, 00:32:29.019 "zoned": false, 00:32:29.019 "supported_io_types": { 00:32:29.019 "read": true, 00:32:29.019 "write": true, 00:32:29.019 "unmap": true, 00:32:29.019 "flush": false, 00:32:29.019 "reset": true, 00:32:29.019 "nvme_admin": false, 00:32:29.019 "nvme_io": false, 00:32:29.019 "nvme_io_md": false, 00:32:29.019 "write_zeroes": true, 00:32:29.019 "zcopy": false, 00:32:29.019 "get_zone_info": false, 00:32:29.019 "zone_management": false, 00:32:29.019 "zone_append": false, 00:32:29.019 "compare": false, 00:32:29.019 "compare_and_write": false, 00:32:29.019 "abort": false, 00:32:29.019 "seek_hole": true, 00:32:29.019 "seek_data": true, 00:32:29.019 "copy": false, 00:32:29.019 "nvme_iov_md": false 00:32:29.019 }, 00:32:29.019 "driver_specific": { 00:32:29.019 "lvol": { 00:32:29.019 "lvol_store_uuid": "af34fb36-3203-4c10-8f42-a75bdf7360d5", 00:32:29.019 "base_bdev": "aio_bdev", 00:32:29.019 "thin_provision": false, 00:32:29.019 "num_allocated_clusters": 38, 00:32:29.019 "snapshot": false, 00:32:29.019 "clone": false, 00:32:29.019 "esnap_clone": false 00:32:29.019 } 00:32:29.019 } 00:32:29.019 } 00:32:29.019 ] 00:32:29.019 19:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:32:29.019 19:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af34fb36-3203-4c10-8f42-a75bdf7360d5 00:32:29.019 19:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:29.280 19:22:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:29.280 19:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:29.280 19:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af34fb36-3203-4c10-8f42-a75bdf7360d5 00:32:29.280 19:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:29.280 19:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cb001c7d-c78e-42aa-9ba3-c107f4e5daaf 00:32:29.541 19:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u af34fb36-3203-4c10-8f42-a75bdf7360d5 00:32:29.802 19:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:30.063 19:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:30.063 00:32:30.063 real 0m15.943s 00:32:30.063 user 0m15.596s 00:32:30.063 sys 0m1.455s 00:32:30.063 19:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:30.063 19:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:30.063 ************************************ 00:32:30.063 END TEST lvs_grow_clean 00:32:30.063 ************************************ 00:32:30.063 19:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:32:30.063 19:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:30.063 19:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:30.063 19:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:30.063 ************************************ 00:32:30.063 START TEST lvs_grow_dirty 00:32:30.063 ************************************ 00:32:30.063 19:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:32:30.063 19:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:30.063 19:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:30.063 19:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:30.063 19:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:30.063 19:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:30.063 19:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:30.063 19:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:30.063 19:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:30.063 19:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:30.323 19:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:30.323 19:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:30.584 19:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=117afd29-b88d-4c10-a0e7-1b46fa33de3d 00:32:30.584 19:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 117afd29-b88d-4c10-a0e7-1b46fa33de3d 00:32:30.584 19:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:30.584 19:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:30.584 19:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:30.584 19:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 117afd29-b88d-4c10-a0e7-1b46fa33de3d lvol 150 00:32:30.844 19:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e91b6960-f0e4-44da-b731-a62924bfa7ac 00:32:30.844 19:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:30.844 19:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:31.104 [2024-11-26 19:22:48.069529] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:31.104 [2024-11-26 19:22:48.069698] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:31.104 true 00:32:31.105 19:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 117afd29-b88d-4c10-a0e7-1b46fa33de3d 00:32:31.105 19:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:31.105 19:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:31.105 19:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:31.365 19:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e91b6960-f0e4-44da-b731-a62924bfa7ac 00:32:31.627 19:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:31.627 [2024-11-26 19:22:48.754152] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:31.627 19:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:31.887 19:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3178396 00:32:31.887 19:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:31.887 19:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:31.888 19:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3178396 /var/tmp/bdevperf.sock 00:32:31.888 19:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3178396 ']' 00:32:31.888 19:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:31.888 19:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:31.888 19:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:31.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:32:31.888 19:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:31.888 19:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:31.888 [2024-11-26 19:22:49.025962] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:32:31.888 [2024-11-26 19:22:49.026034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3178396 ] 00:32:32.156 [2024-11-26 19:22:49.114381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:32.156 [2024-11-26 19:22:49.148800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:32.726 19:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:32.726 19:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:32:32.726 19:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:32.986 Nvme0n1 00:32:32.986 19:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:33.246 [ 00:32:33.246 { 00:32:33.246 "name": "Nvme0n1", 00:32:33.246 "aliases": [ 00:32:33.246 "e91b6960-f0e4-44da-b731-a62924bfa7ac" 00:32:33.247 ], 00:32:33.247 "product_name": "NVMe disk", 00:32:33.247 "block_size": 4096, 00:32:33.247 "num_blocks": 38912, 00:32:33.247 "uuid": "e91b6960-f0e4-44da-b731-a62924bfa7ac", 00:32:33.247 "numa_id": 0, 00:32:33.247 "assigned_rate_limits": { 00:32:33.247 "rw_ios_per_sec": 0, 00:32:33.247 "rw_mbytes_per_sec": 0, 00:32:33.247 "r_mbytes_per_sec": 0, 00:32:33.247 "w_mbytes_per_sec": 0 00:32:33.247 }, 00:32:33.247 "claimed": false, 00:32:33.247 "zoned": false, 00:32:33.247 "supported_io_types": { 00:32:33.247 "read": true, 00:32:33.247 "write": true, 00:32:33.247 "unmap": true, 00:32:33.247 "flush": true, 00:32:33.247 "reset": true, 00:32:33.247 "nvme_admin": true, 00:32:33.247 "nvme_io": true, 00:32:33.247 "nvme_io_md": false, 00:32:33.247 "write_zeroes": true, 00:32:33.247 "zcopy": false, 00:32:33.247 "get_zone_info": false, 00:32:33.247 "zone_management": false, 00:32:33.247 "zone_append": false, 00:32:33.247 "compare": true, 00:32:33.247 "compare_and_write": true, 00:32:33.247 "abort": true, 00:32:33.247 "seek_hole": false, 00:32:33.247 "seek_data": false, 00:32:33.247 "copy": true, 00:32:33.247 "nvme_iov_md": false 00:32:33.247 }, 00:32:33.247 "memory_domains": [ 00:32:33.247 { 00:32:33.247 "dma_device_id": "system", 00:32:33.247 "dma_device_type": 1 00:32:33.247 } 00:32:33.247 ], 00:32:33.247 "driver_specific": { 00:32:33.247 "nvme": [ 00:32:33.247 { 00:32:33.247 "trid": { 00:32:33.247 "trtype": "TCP", 00:32:33.247 "adrfam": "IPv4", 00:32:33.247 "traddr": "10.0.0.2", 00:32:33.247 "trsvcid": "4420", 00:32:33.247 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:33.247 }, 00:32:33.247 "ctrlr_data": 
{ 00:32:33.247 "cntlid": 1, 00:32:33.247 "vendor_id": "0x8086", 00:32:33.247 "model_number": "SPDK bdev Controller", 00:32:33.247 "serial_number": "SPDK0", 00:32:33.247 "firmware_revision": "25.01", 00:32:33.247 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:33.247 "oacs": { 00:32:33.247 "security": 0, 00:32:33.247 "format": 0, 00:32:33.247 "firmware": 0, 00:32:33.247 "ns_manage": 0 00:32:33.247 }, 00:32:33.247 "multi_ctrlr": true, 00:32:33.247 "ana_reporting": false 00:32:33.247 }, 00:32:33.247 "vs": { 00:32:33.247 "nvme_version": "1.3" 00:32:33.247 }, 00:32:33.247 "ns_data": { 00:32:33.247 "id": 1, 00:32:33.247 "can_share": true 00:32:33.247 } 00:32:33.247 } 00:32:33.247 ], 00:32:33.247 "mp_policy": "active_passive" 00:32:33.247 } 00:32:33.247 } 00:32:33.247 ] 00:32:33.247 19:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3178548 00:32:33.247 19:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:33.247 19:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:33.247 Running I/O for 10 seconds... 00:32:34.190 Latency(us) 00:32:34.190 [2024-11-26T18:22:51.403Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:34.190 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:34.190 Nvme0n1 : 1.00 17282.00 67.51 0.00 0.00 0.00 0.00 0.00 00:32:34.190 [2024-11-26T18:22:51.403Z] =================================================================================================================== 00:32:34.190 [2024-11-26T18:22:51.403Z] Total : 17282.00 67.51 0.00 0.00 0.00 0.00 0.00 00:32:34.190 00:32:35.130 19:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 117afd29-b88d-4c10-a0e7-1b46fa33de3d 00:32:35.391 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:35.391 Nvme0n1 : 2.00 17531.00 68.48 0.00 0.00 0.00 0.00 0.00 00:32:35.391 [2024-11-26T18:22:52.604Z] =================================================================================================================== 00:32:35.391 [2024-11-26T18:22:52.604Z] Total : 17531.00 68.48 0.00 0.00 0.00 0.00 0.00 00:32:35.391 00:32:35.391 true 00:32:35.391 19:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 117afd29-b88d-4c10-a0e7-1b46fa33de3d 00:32:35.391 19:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:35.651 19:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:35.651 19:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:35.651 19:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3178548 00:32:36.224 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:36.224 Nvme0n1 : 
3.00 17614.00 68.80 0.00 0.00 0.00 0.00 0.00 00:32:36.224 [2024-11-26T18:22:53.437Z] =================================================================================================================== 00:32:36.224 [2024-11-26T18:22:53.437Z] Total : 17614.00 68.80 0.00 0.00 0.00 0.00 0.00 00:32:36.224 00:32:37.165 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:37.165 Nvme0n1 : 4.00 17687.25 69.09 0.00 0.00 0.00 0.00 0.00 00:32:37.165 [2024-11-26T18:22:54.378Z] =================================================================================================================== 00:32:37.165 [2024-11-26T18:22:54.378Z] Total : 17687.25 69.09 0.00 0.00 0.00 0.00 0.00 00:32:37.165 00:32:38.551 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:38.551 Nvme0n1 : 5.00 18340.80 71.64 0.00 0.00 0.00 0.00 0.00 00:32:38.551 [2024-11-26T18:22:55.764Z] =================================================================================================================== 00:32:38.551 [2024-11-26T18:22:55.764Z] Total : 18340.80 71.64 0.00 0.00 0.00 0.00 0.00 00:32:38.551 00:32:39.491 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:39.492 Nvme0n1 : 6.00 19485.67 76.12 0.00 0.00 0.00 0.00 0.00 00:32:39.492 [2024-11-26T18:22:56.705Z] =================================================================================================================== 00:32:39.492 [2024-11-26T18:22:56.705Z] Total : 19485.67 76.12 0.00 0.00 0.00 0.00 0.00 00:32:39.492 00:32:40.448 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:40.448 Nvme0n1 : 7.00 20321.57 79.38 0.00 0.00 0.00 0.00 0.00 00:32:40.448 [2024-11-26T18:22:57.661Z] =================================================================================================================== 00:32:40.448 [2024-11-26T18:22:57.661Z] Total : 20321.57 79.38 0.00 0.00 0.00 0.00 0.00 00:32:40.448 00:32:41.389 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:41.389 Nvme0n1 : 8.00 20940.50 81.80 0.00 0.00 0.00 0.00 0.00 00:32:41.389 [2024-11-26T18:22:58.602Z] =================================================================================================================== 00:32:41.389 [2024-11-26T18:22:58.602Z] Total : 20940.50 81.80 0.00 0.00 0.00 0.00 0.00 00:32:41.389 00:32:42.328 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:42.328 Nvme0n1 : 9.00 21421.89 83.68 0.00 0.00 0.00 0.00 0.00 00:32:42.328 [2024-11-26T18:22:59.541Z] =================================================================================================================== 00:32:42.328 [2024-11-26T18:22:59.541Z] Total : 21421.89 83.68 0.00 0.00 0.00 0.00 0.00 00:32:42.328 00:32:43.265 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:43.265 Nvme0n1 : 10.00 21807.00 85.18 0.00 0.00 0.00 0.00 0.00 00:32:43.265 [2024-11-26T18:23:00.478Z] =================================================================================================================== 00:32:43.265 [2024-11-26T18:23:00.478Z] Total : 21807.00 85.18 0.00 0.00 0.00 0.00 0.00 00:32:43.265 00:32:43.265 00:32:43.265 Latency(us) 00:32:43.265 [2024-11-26T18:23:00.478Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:43.265 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:43.265 Nvme0n1 : 10.00 21810.80 85.20 0.00 0.00 5866.24 2894.51 29272.75 00:32:43.265 
[2024-11-26T18:23:00.478Z] =================================================================================================================== 00:32:43.265 [2024-11-26T18:23:00.478Z] Total : 21810.80 85.20 0.00 0.00 5866.24 2894.51 29272.75 00:32:43.265 { 00:32:43.265 "results": [ 00:32:43.265 { 00:32:43.265 "job": "Nvme0n1", 00:32:43.265 "core_mask": "0x2", 00:32:43.265 "workload": "randwrite", 00:32:43.265 "status": "finished", 00:32:43.265 "queue_depth": 128, 00:32:43.265 "io_size": 4096, 00:32:43.265 "runtime": 10.004128, 00:32:43.265 "iops": 21810.796503203477, 00:32:43.265 "mibps": 85.19842384063858, 00:32:43.265 "io_failed": 0, 00:32:43.265 "io_timeout": 0, 00:32:43.265 "avg_latency_us": 5866.241161513855, 00:32:43.265 "min_latency_us": 2894.5066666666667, 00:32:43.265 "max_latency_us": 29272.746666666666 00:32:43.265 } 00:32:43.265 ], 00:32:43.265 "core_count": 1 00:32:43.265 } 00:32:43.265 19:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3178396 00:32:43.265 19:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3178396 ']' 00:32:43.265 19:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3178396 00:32:43.265 19:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:32:43.265 19:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:43.265 19:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3178396 00:32:43.265 19:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:43.265 19:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:43.265 19:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3178396' 00:32:43.265 killing process with pid 3178396 00:32:43.265 19:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3178396 00:32:43.265 Received shutdown signal, test time was about 10.000000 seconds 00:32:43.265 00:32:43.265 Latency(us) 00:32:43.265 [2024-11-26T18:23:00.478Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:43.265 [2024-11-26T18:23:00.478Z] =================================================================================================================== 00:32:43.265 [2024-11-26T18:23:00.478Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:43.265 19:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3178396 00:32:43.523 19:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:43.782 19:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:32:43.782 19:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 117afd29-b88d-4c10-a0e7-1b46fa33de3d 00:32:43.782 19:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:44.043 19:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:44.043 19:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:32:44.043 19:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3174414 00:32:44.043 19:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3174414 00:32:44.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3174414 Killed "${NVMF_APP[@]}" "$@" 00:32:44.043 19:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:32:44.043 19:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:32:44.043 19:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:44.043 19:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:44.043 19:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:44.043 19:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3180760 00:32:44.043 19:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3180760 00:32:44.043 19:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:44.043 19:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3180760 ']' 00:32:44.043 19:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:44.043 19:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:44.043 19:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:44.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:44.043 19:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:44.043 19:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:44.302 [2024-11-26 19:23:01.273450] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:44.302 [2024-11-26 19:23:01.274811] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:32:44.302 [2024-11-26 19:23:01.274874] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:44.302 [2024-11-26 19:23:01.369862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:44.302 [2024-11-26 19:23:01.403238] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:44.302 [2024-11-26 19:23:01.403268] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:44.302 [2024-11-26 19:23:01.403274] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:44.302 [2024-11-26 19:23:01.403279] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:44.302 [2024-11-26 19:23:01.403283] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:44.302 [2024-11-26 19:23:01.403773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:44.302 [2024-11-26 19:23:01.457028] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:44.302 [2024-11-26 19:23:01.457223] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:44.871 19:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:44.872 19:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:32:44.872 19:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:44.872 19:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:44.872 19:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:45.133 19:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:45.133 19:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:45.133 [2024-11-26 19:23:02.278290] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:32:45.133 [2024-11-26 19:23:02.278539] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:32:45.133 [2024-11-26 19:23:02.278630] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:32:45.133 19:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:32:45.133 19:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e91b6960-f0e4-44da-b731-a62924bfa7ac 00:32:45.133 19:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=e91b6960-f0e4-44da-b731-a62924bfa7ac 00:32:45.133 19:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:45.133 19:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:32:45.133 19:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:45.133 19:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:45.133 19:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:45.394 19:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e91b6960-f0e4-44da-b731-a62924bfa7ac -t 2000 00:32:45.654 [ 00:32:45.654 { 00:32:45.654 "name": "e91b6960-f0e4-44da-b731-a62924bfa7ac", 00:32:45.654 "aliases": [ 00:32:45.654 "lvs/lvol" 00:32:45.654 ], 00:32:45.654 "product_name": "Logical Volume", 00:32:45.654 "block_size": 4096, 00:32:45.654 "num_blocks": 38912, 00:32:45.654 "uuid": "e91b6960-f0e4-44da-b731-a62924bfa7ac", 00:32:45.654 "assigned_rate_limits": { 00:32:45.654 "rw_ios_per_sec": 0, 00:32:45.654 "rw_mbytes_per_sec": 0, 00:32:45.654 
"r_mbytes_per_sec": 0, 00:32:45.654 "w_mbytes_per_sec": 0 00:32:45.654 }, 00:32:45.654 "claimed": false, 00:32:45.654 "zoned": false, 00:32:45.654 "supported_io_types": { 00:32:45.654 "read": true, 00:32:45.654 "write": true, 00:32:45.654 "unmap": true, 00:32:45.654 "flush": false, 00:32:45.654 "reset": true, 00:32:45.654 "nvme_admin": false, 00:32:45.654 "nvme_io": false, 00:32:45.654 "nvme_io_md": false, 00:32:45.654 "write_zeroes": true, 00:32:45.654 "zcopy": false, 00:32:45.654 "get_zone_info": false, 00:32:45.654 "zone_management": false, 00:32:45.654 "zone_append": false, 00:32:45.654 "compare": false, 00:32:45.654 "compare_and_write": false, 00:32:45.654 "abort": false, 00:32:45.654 "seek_hole": true, 00:32:45.654 "seek_data": true, 00:32:45.654 "copy": false, 00:32:45.654 "nvme_iov_md": false 00:32:45.654 }, 00:32:45.654 "driver_specific": { 00:32:45.654 "lvol": { 00:32:45.654 "lvol_store_uuid": "117afd29-b88d-4c10-a0e7-1b46fa33de3d", 00:32:45.654 "base_bdev": "aio_bdev", 00:32:45.654 "thin_provision": false, 00:32:45.654 "num_allocated_clusters": 38, 00:32:45.654 "snapshot": false, 00:32:45.654 "clone": false, 00:32:45.654 "esnap_clone": false 00:32:45.654 } 00:32:45.654 } 00:32:45.654 } 00:32:45.654 ] 00:32:45.654 19:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:32:45.654 19:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 117afd29-b88d-4c10-a0e7-1b46fa33de3d 00:32:45.654 19:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:32:45.654 19:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:32:45.654 19:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 117afd29-b88d-4c10-a0e7-1b46fa33de3d 00:32:45.654 19:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:32:45.915 19:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:32:45.915 19:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:46.175 [2024-11-26 19:23:03.204352] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:46.175 19:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 117afd29-b88d-4c10-a0e7-1b46fa33de3d 00:32:46.175 19:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:32:46.175 19:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 117afd29-b88d-4c10-a0e7-1b46fa33de3d 00:32:46.175 19:23:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:46.175 19:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:46.175 19:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:46.175 19:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:46.175 19:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:46.175 19:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:46.175 19:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:46.175 19:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:46.175 19:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 117afd29-b88d-4c10-a0e7-1b46fa33de3d 00:32:46.435 request: 00:32:46.435 { 00:32:46.435 "uuid": "117afd29-b88d-4c10-a0e7-1b46fa33de3d", 00:32:46.435 "method": "bdev_lvol_get_lvstores", 00:32:46.435 "req_id": 1 00:32:46.435 } 00:32:46.435 Got JSON-RPC error response 00:32:46.435 response: 00:32:46.435 { 00:32:46.435 "code": -19, 00:32:46.435 "message": "No such device" 00:32:46.435 } 00:32:46.435 19:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:32:46.435 19:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:46.435 19:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:46.435 19:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:46.435 19:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:46.435 aio_bdev 00:32:46.435 19:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e91b6960-f0e4-44da-b731-a62924bfa7ac 00:32:46.435 19:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=e91b6960-f0e4-44da-b731-a62924bfa7ac 00:32:46.435 19:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:46.435 19:23:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:32:46.435 19:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:46.435 19:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:46.435 19:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:46.695 19:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e91b6960-f0e4-44da-b731-a62924bfa7ac -t 2000 00:32:46.956 [ 00:32:46.956 { 00:32:46.956 "name": "e91b6960-f0e4-44da-b731-a62924bfa7ac", 00:32:46.956 "aliases": [ 00:32:46.956 "lvs/lvol" 00:32:46.956 ], 00:32:46.956 "product_name": "Logical Volume", 00:32:46.956 "block_size": 4096, 00:32:46.956 "num_blocks": 38912, 00:32:46.956 "uuid": "e91b6960-f0e4-44da-b731-a62924bfa7ac", 00:32:46.956 "assigned_rate_limits": { 00:32:46.956 "rw_ios_per_sec": 0, 00:32:46.956 "rw_mbytes_per_sec": 0, 00:32:46.956 "r_mbytes_per_sec": 0, 00:32:46.956 "w_mbytes_per_sec": 0 00:32:46.956 }, 00:32:46.956 "claimed": false, 00:32:46.956 "zoned": false, 00:32:46.956 "supported_io_types": { 00:32:46.956 "read": true, 00:32:46.956 "write": true, 00:32:46.956 "unmap": true, 00:32:46.956 "flush": false, 00:32:46.956 "reset": true, 00:32:46.956 "nvme_admin": false, 00:32:46.956 "nvme_io": false, 00:32:46.956 "nvme_io_md": false, 00:32:46.956 "write_zeroes": true, 00:32:46.956 "zcopy": false, 00:32:46.956 "get_zone_info": false, 00:32:46.956 "zone_management": false, 00:32:46.956 "zone_append": false, 00:32:46.956 "compare": false, 00:32:46.956 "compare_and_write": false, 00:32:46.956 "abort": false, 00:32:46.956 "seek_hole": true, 00:32:46.956 "seek_data": true, 00:32:46.956 "copy": false, 00:32:46.956 "nvme_iov_md": false 00:32:46.956 }, 00:32:46.956 "driver_specific": { 00:32:46.956 "lvol": { 00:32:46.956 "lvol_store_uuid": "117afd29-b88d-4c10-a0e7-1b46fa33de3d", 00:32:46.956 "base_bdev": "aio_bdev", 00:32:46.956 "thin_provision": false, 00:32:46.956 "num_allocated_clusters": 38, 00:32:46.956 "snapshot": false, 00:32:46.956 "clone": false, 00:32:46.956 "esnap_clone": false 00:32:46.956 } 00:32:46.956 } 00:32:46.956 } 00:32:46.956 ] 00:32:46.956 19:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:32:46.956 19:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 117afd29-b88d-4c10-a0e7-1b46fa33de3d 00:32:46.956 19:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:47.217 19:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:47.217 19:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 117afd29-b88d-4c10-a0e7-1b46fa33de3d 00:32:47.217 19:23:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:47.217 19:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:47.217 19:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e91b6960-f0e4-44da-b731-a62924bfa7ac 00:32:47.477 19:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 117afd29-b88d-4c10-a0e7-1b46fa33de3d 00:32:47.737 19:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:47.998 19:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:47.998 00:32:47.998 real 0m17.841s 00:32:47.998 user 0m35.659s 00:32:47.998 sys 0m3.198s 00:32:47.998 19:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:47.998 19:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:47.998 ************************************ 00:32:47.998 END TEST lvs_grow_dirty 00:32:47.998 ************************************ 00:32:47.998 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:32:47.998 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:32:47.998 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:32:47.998 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:32:47.998 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:47.998 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:32:47.998 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:32:47.998 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:32:47.998 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:47.998 nvmf_trace.0 00:32:47.998 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:32:47.998 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:32:47.998 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:47.998 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
00:32:47.998 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:47.998 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:32:47.998 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:47.998 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:47.998 rmmod nvme_tcp 00:32:47.998 rmmod nvme_fabrics 00:32:47.998 rmmod nvme_keyring 00:32:47.998 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:47.998 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:32:47.998 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:32:47.998 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3180760 ']' 00:32:47.998 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3180760 00:32:47.998 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3180760 ']' 00:32:47.998 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3180760 00:32:47.998 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:32:47.998 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:47.998 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3180760 00:32:48.258 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:48.258 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:48.258 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3180760' 00:32:48.258 killing process with pid 3180760 00:32:48.258 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3180760 00:32:48.258 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3180760 00:32:48.258 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:48.258 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:48.258 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:48.258 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:32:48.258 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:32:48.258 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:48.258 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:32:48.258 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:48.258 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:48.258 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:48.258 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:48.258 19:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:50.798 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:50.798 00:32:50.798 real 0m45.266s 00:32:50.798 user 0m54.233s 00:32:50.798 sys 0m10.897s 00:32:50.798 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:50.798 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:50.798 ************************************ 00:32:50.798 END TEST nvmf_lvs_grow 00:32:50.798 ************************************ 00:32:50.798 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:50.798 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:50.798 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:50.798 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:50.798 ************************************ 00:32:50.798 START TEST nvmf_bdev_io_wait 00:32:50.798 ************************************ 00:32:50.798 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:50.798 * Looking for test storage... 
00:32:50.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:50.798 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:50.798 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:32:50.798 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:50.798 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:50.798 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:50.798 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:50.798 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:50.798 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:32:50.798 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:32:50.798 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:32:50.798 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:32:50.798 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:32:50.798 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:32:50.798 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:32:50.798 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:50.798 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:32:50.798 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:32:50.798 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:50.798 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:50.798 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:32:50.798 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:32:50.798 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:50.798 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:32:50.798 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:32:50.798 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:32:50.798 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:50.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.799 --rc genhtml_branch_coverage=1 00:32:50.799 --rc genhtml_function_coverage=1 00:32:50.799 --rc genhtml_legend=1 00:32:50.799 --rc geninfo_all_blocks=1 00:32:50.799 --rc geninfo_unexecuted_blocks=1 00:32:50.799 00:32:50.799 ' 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:50.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.799 --rc genhtml_branch_coverage=1 00:32:50.799 --rc genhtml_function_coverage=1 00:32:50.799 --rc genhtml_legend=1 00:32:50.799 --rc geninfo_all_blocks=1 00:32:50.799 --rc geninfo_unexecuted_blocks=1 00:32:50.799 00:32:50.799 ' 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:50.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.799 --rc genhtml_branch_coverage=1 00:32:50.799 --rc genhtml_function_coverage=1 00:32:50.799 --rc genhtml_legend=1 00:32:50.799 --rc geninfo_all_blocks=1 00:32:50.799 --rc geninfo_unexecuted_blocks=1 00:32:50.799 00:32:50.799 ' 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:50.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.799 --rc genhtml_branch_coverage=1 00:32:50.799 --rc genhtml_function_coverage=1 00:32:50.799 --rc genhtml_legend=1 00:32:50.799 --rc geninfo_all_blocks=1 00:32:50.799 --rc 
geninfo_unexecuted_blocks=1 00:32:50.799 00:32:50.799 ' 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:32:50.799 19:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
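The trace above is gather_supported_nvmf_pci_devs assembling per-vendor PCI device-ID lists (Intel E810/X722, Mellanox ConnectX) and, since this rig is configured for e810, narrowing pci_devs to the E810 entries. A minimal sketch of the same discovery done by hand, using the vendor:device pair from the arrays above (sysfs path follows the standard layout):

  # Enumerate Intel E810 ports by vendor:device ID (0x8086:0x159b, as matched above)
  lspci -d 8086:159b
  # Resolve a matched PCI address to its kernel net device, as the per-pci loop below does
  ls /sys/bus/pci/devices/0000:4b:00.0/net/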
00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:58.936 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:58.936 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:58.936 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:58.937 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:58.937 
19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:58.937 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:58.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:58.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.587 ms 00:32:58.937 00:32:58.937 --- 10.0.0.2 ping statistics --- 00:32:58.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:58.937 rtt min/avg/max/mdev = 0.587/0.587/0.587/0.000 ms 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:58.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:58.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:32:58.937 00:32:58.937 --- 10.0.0.1 ping statistics --- 00:32:58.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:58.937 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3185761 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3185761 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3185761 ']' 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:58.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
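The nvmf_tcp_init sequence above moves one E810 port into a private network namespace so the target (10.0.0.2) and initiator (10.0.0.1) talk over the physical link rather than the loopback stack; the two pings confirm reachability in both directions before the target app is launched inside that namespace. A minimal sketch of the equivalent manual setup, using the interface names from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
  ping -c 1 10.0.0.2                                             # cross-namespace reachability check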
00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:58.937 19:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:58.937 [2024-11-26 19:23:15.462081] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:58.937 [2024-11-26 19:23:15.463240] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:32:58.937 [2024-11-26 19:23:15.463297] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:58.937 [2024-11-26 19:23:15.563996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:58.937 [2024-11-26 19:23:15.618585] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:58.937 [2024-11-26 19:23:15.618635] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:58.937 [2024-11-26 19:23:15.618644] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:58.937 [2024-11-26 19:23:15.618651] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:58.937 [2024-11-26 19:23:15.618657] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:58.937 [2024-11-26 19:23:15.620715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:58.937 [2024-11-26 19:23:15.620870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:58.937 [2024-11-26 19:23:15.621033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:58.937 [2024-11-26 19:23:15.621034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:58.938 [2024-11-26 19:23:15.621383] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
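With --interrupt-mode, each reactor sleeps on event file descriptors instead of busy-polling, which is why startup prints spdk_interrupt_mode_enable and every spdk_thread reports being set "to intr mode". If runtime confirmation is wanted, the reactor-introspection RPC can be queried; a hedged one-liner, assuming the usual framework_get_reactors output shape:

  ./scripts/rpc.py framework_get_reactors | jq '.reactors[] | {lcore, busy, idle}'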
00:32:59.198 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:59.198 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:32:59.198 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:59.198 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:59.198 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:59.198 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:59.198 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:32:59.198 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.198 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:59.198 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.198 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:32:59.198 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.198 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:59.198 [2024-11-26 19:23:16.390456] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:59.198 [2024-11-26 19:23:16.390999] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:59.198 [2024-11-26 19:23:16.391057] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:59.198 [2024-11-26 19:23:16.391246] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
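Because the target was started with --wait-for-rpc, subsystem initialization is deferred until the test has injected its bdev options; only after framework_start_init do the nvmf poll-group threads come up (in interrupt mode, per the notices above). A hedged sketch of the same bring-up driven directly with scripts/rpc.py, flags copied from the rpc_cmd calls in the surrounding trace (the test additionally runs the target under ip netns exec cvl_0_0_ns_spdk):

  ./build/bin/nvmf_tgt -m 0xF --interrupt-mode --wait-for-rpc &
  ./scripts/rpc.py bdev_set_options -p 5 -c 1        # small bdev I/O pool/cache, must be set before init
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420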
00:32:59.198 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.198 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:59.198 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.198 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:59.198 [2024-11-26 19:23:16.401594] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:59.461 Malloc0 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:59.461 [2024-11-26 19:23:16.478156] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3186091 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:32:59.461 19:23:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3186094 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:59.461 { 00:32:59.461 "params": { 00:32:59.461 "name": "Nvme$subsystem", 00:32:59.461 "trtype": "$TEST_TRANSPORT", 00:32:59.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:59.461 "adrfam": "ipv4", 00:32:59.461 "trsvcid": "$NVMF_PORT", 00:32:59.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:59.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:59.461 "hdgst": ${hdgst:-false}, 00:32:59.461 "ddgst": ${ddgst:-false} 00:32:59.461 }, 00:32:59.461 "method": "bdev_nvme_attach_controller" 00:32:59.461 } 00:32:59.461 EOF 00:32:59.461 )") 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3186097 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:59.461 { 00:32:59.461 "params": { 00:32:59.461 "name": "Nvme$subsystem", 00:32:59.461 "trtype": "$TEST_TRANSPORT", 00:32:59.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:59.461 "adrfam": "ipv4", 00:32:59.461 "trsvcid": "$NVMF_PORT", 00:32:59.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:59.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:59.461 "hdgst": ${hdgst:-false}, 00:32:59.461 "ddgst": ${ddgst:-false} 00:32:59.461 }, 00:32:59.461 "method": "bdev_nvme_attach_controller" 00:32:59.461 } 00:32:59.461 EOF 00:32:59.461 )") 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3186101 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 
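Each bdevperf instance is handed its bdev configuration as JSON on /dev/fd/63 rather than over RPC; gen_nvmf_target_json renders the heredoc template above by substituting the target address, NQNs, and digest flags (the rendered form is printed a few lines below). A sketch of the fragment each instance receives, with a hypothetical file name standing in for the process-substitution fd:

  # nvme1.json (illustrative name) -- one bdev_nvme_attach_controller entry
  {
    "params": {
      "name": "Nvme1",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode1",
      "hostnqn": "nqn.2016-06.io.spdk:host1",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }

gen_nvmf_target_json wraps this entry in the full subsystems/config envelope bdevperf expects, so the attached namespace shows up as bdev Nvme1n1 in each job.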
00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:59.461 { 00:32:59.461 "params": { 00:32:59.461 "name": "Nvme$subsystem", 00:32:59.461 "trtype": "$TEST_TRANSPORT", 00:32:59.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:59.461 "adrfam": "ipv4", 00:32:59.461 "trsvcid": "$NVMF_PORT", 00:32:59.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:59.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:59.461 "hdgst": ${hdgst:-false}, 00:32:59.461 "ddgst": ${ddgst:-false} 00:32:59.461 }, 00:32:59.461 "method": "bdev_nvme_attach_controller" 00:32:59.461 } 00:32:59.461 EOF 00:32:59.461 )") 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:59.461 { 00:32:59.461 "params": { 00:32:59.461 "name": "Nvme$subsystem", 00:32:59.461 "trtype": "$TEST_TRANSPORT", 00:32:59.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:59.461 "adrfam": "ipv4", 00:32:59.461 "trsvcid": "$NVMF_PORT", 00:32:59.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:59.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:59.461 "hdgst": ${hdgst:-false}, 00:32:59.461 "ddgst": ${ddgst:-false} 00:32:59.461 }, 00:32:59.461 "method": "bdev_nvme_attach_controller" 00:32:59.461 } 00:32:59.461 EOF 00:32:59.461 )") 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3186091 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:59.461 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:59.462 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:59.462 "params": { 00:32:59.462 "name": "Nvme1", 00:32:59.462 "trtype": "tcp", 00:32:59.462 "traddr": "10.0.0.2", 00:32:59.462 "adrfam": "ipv4", 00:32:59.462 "trsvcid": "4420", 00:32:59.462 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:59.462 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:59.462 "hdgst": false, 00:32:59.462 "ddgst": false 00:32:59.462 }, 00:32:59.462 "method": "bdev_nvme_attach_controller" 00:32:59.462 }' 00:32:59.462 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:59.462 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:59.462 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:59.462 "params": { 00:32:59.462 "name": "Nvme1", 00:32:59.462 "trtype": "tcp", 00:32:59.462 "traddr": "10.0.0.2", 00:32:59.462 "adrfam": "ipv4", 00:32:59.462 "trsvcid": "4420", 00:32:59.462 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:59.462 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:59.462 "hdgst": false, 00:32:59.462 "ddgst": false 00:32:59.462 }, 00:32:59.462 "method": "bdev_nvme_attach_controller" 00:32:59.462 }' 00:32:59.462 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:59.462 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:59.462 "params": { 00:32:59.462 "name": "Nvme1", 00:32:59.462 "trtype": "tcp", 00:32:59.462 "traddr": "10.0.0.2", 00:32:59.462 "adrfam": "ipv4", 00:32:59.462 "trsvcid": "4420", 00:32:59.462 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:59.462 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:59.462 "hdgst": false, 00:32:59.462 "ddgst": false 00:32:59.462 }, 00:32:59.462 "method": "bdev_nvme_attach_controller" 00:32:59.462 }' 00:32:59.462 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:59.462 19:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:59.462 "params": { 00:32:59.462 "name": "Nvme1", 00:32:59.462 "trtype": "tcp", 00:32:59.462 "traddr": "10.0.0.2", 00:32:59.462 "adrfam": "ipv4", 00:32:59.462 "trsvcid": "4420", 00:32:59.462 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:59.462 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:59.462 "hdgst": false, 00:32:59.462 "ddgst": false 00:32:59.462 }, 00:32:59.462 "method": "bdev_nvme_attach_controller" 00:32:59.462 }' 00:32:59.462 [2024-11-26 19:23:16.537196] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:32:59.462 [2024-11-26 19:23:16.537273] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:32:59.462 [2024-11-26 19:23:16.538089] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
00:32:59.462 [2024-11-26 19:23:16.538167] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:32:59.462 [2024-11-26 19:23:16.540748] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:32:59.462 [2024-11-26 19:23:16.540747] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:32:59.462 [2024-11-26 19:23:16.540821] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:32:59.462 [2024-11-26 19:23:16.540827] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:32:59.723 [2024-11-26 19:23:16.764047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:59.723 [2024-11-26 19:23:16.804230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:59.723 [2024-11-26 19:23:16.858755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:59.723 [2024-11-26 19:23:16.900019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:59.723 [2024-11-26 19:23:16.926697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:59.982 [2024-11-26 19:23:16.962307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:32:59.982 [2024-11-26 19:23:16.991753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:59.982 [2024-11-26 19:23:17.029605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:59.982 Running I/O for 1 seconds... 00:33:00.242 Running I/O for 1 seconds... 00:33:00.242 Running I/O for 1 seconds... 00:33:00.242 Running I/O for 1 seconds...
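All four jobs run for one second concurrently against the same Malloc0-backed namespace, one workload per core (write on 0x10, read on 0x20, flush on 0x40, unmap on 0x80); the script then reaps them by PID before tearing the target down. A reduced sketch of that orchestration, with $BDEVPERF standing in for the full bdevperf path:

  "$BDEVPERF" --json <(gen_nvmf_target_json) -q 128 -o 4096 -t 1 -s 256 -m 0x10 -w write & WRITE_PID=$!
  "$BDEVPERF" --json <(gen_nvmf_target_json) -q 128 -o 4096 -t 1 -s 256 -m 0x20 -w read  & READ_PID=$!
  "$BDEVPERF" --json <(gen_nvmf_target_json) -q 128 -o 4096 -t 1 -s 256 -m 0x40 -w flush & FLUSH_PID=$!
  "$BDEVPERF" --json <(gen_nvmf_target_json) -q 128 -o 4096 -t 1 -s 256 -m 0x80 -w unmap & UNMAP_PID=$!
  wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"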
00:33:01.187 182888.00 IOPS, 714.41 MiB/s 00:33:01.187 Latency(us) 00:33:01.187 [2024-11-26T18:23:18.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:01.187 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:33:01.187 Nvme1n1 : 1.00 182523.83 712.98 0.00 0.00 697.13 296.96 1979.73 00:33:01.187 [2024-11-26T18:23:18.400Z] =================================================================================================================== 00:33:01.187 [2024-11-26T18:23:18.400Z] Total : 182523.83 712.98 0.00 0.00 697.13 296.96 1979.73 00:33:01.187 7243.00 IOPS, 28.29 MiB/s 00:33:01.187 Latency(us) 00:33:01.187 [2024-11-26T18:23:18.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:01.187 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:33:01.187 Nvme1n1 : 1.02 7242.91 28.29 0.00 0.00 17512.03 2239.15 24794.45 00:33:01.187 [2024-11-26T18:23:18.400Z] =================================================================================================================== 00:33:01.187 [2024-11-26T18:23:18.400Z] Total : 7242.91 28.29 0.00 0.00 17512.03 2239.15 24794.45 00:33:01.187 13119.00 IOPS, 51.25 MiB/s 00:33:01.187 Latency(us) 00:33:01.187 [2024-11-26T18:23:18.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:01.187 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:33:01.187 Nvme1n1 : 1.01 13173.45 51.46 0.00 0.00 9681.63 5079.04 16165.55 00:33:01.187 [2024-11-26T18:23:18.400Z] =================================================================================================================== 00:33:01.187 [2024-11-26T18:23:18.400Z] Total : 13173.45 51.46 0.00 0.00 9681.63 5079.04 16165.55 00:33:01.187 7198.00 IOPS, 28.12 MiB/s [2024-11-26T18:23:18.400Z] 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3186094 00:33:01.187 00:33:01.187 Latency(us) 00:33:01.187 [2024-11-26T18:23:18.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:01.187 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:33:01.187 Nvme1n1 : 1.01 7289.15 28.47 0.00 0.00 17511.58 4205.23 36044.80 00:33:01.187 [2024-11-26T18:23:18.400Z] =================================================================================================================== 00:33:01.187 [2024-11-26T18:23:18.400Z] Total : 7289.15 28.47 0.00 0.00 17511.58 4205.23 36044.80 00:33:01.448 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3186097 00:33:01.448 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3186101 00:33:01.448 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:01.448 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.448 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:01.448 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.448 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:33:01.448 19:23:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:33:01.448 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:01.448 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:33:01.448 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:01.448 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:33:01.448 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:01.448 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:01.448 rmmod nvme_tcp 00:33:01.448 rmmod nvme_fabrics 00:33:01.448 rmmod nvme_keyring 00:33:01.448 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:01.448 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:33:01.448 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:33:01.448 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3185761 ']' 00:33:01.448 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3185761 00:33:01.448 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3185761 ']' 00:33:01.448 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3185761 00:33:01.448 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:33:01.448 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:01.448 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3185761 00:33:01.448 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:01.448 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:01.448 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3185761' 00:33:01.448 killing process with pid 3185761 00:33:01.448 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3185761 00:33:01.448 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3185761 00:33:01.709 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:01.709 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:01.709 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:01.709 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:33:01.709 19:23:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:33:01.709 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:01.709 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:33:01.709 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:01.709 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:01.709 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:01.709 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:01.709 19:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:04.253 19:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:04.253 00:33:04.253 real 0m13.309s 00:33:04.253 user 0m16.604s 00:33:04.253 sys 0m7.838s 00:33:04.253 19:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:04.253 19:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:04.253 ************************************ 00:33:04.253 END TEST nvmf_bdev_io_wait 00:33:04.253 ************************************ 00:33:04.253 19:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:04.253 19:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:04.253 19:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:04.253 19:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:04.253 ************************************ 00:33:04.253 START TEST nvmf_queue_depth 00:33:04.253 ************************************ 00:33:04.253 19:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:04.253 * Looking for test storage... 
00:33:04.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:04.253 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:04.253 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:33:04.253 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:04.253 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:04.253 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:04.253 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:04.253 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:04.253 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:33:04.253 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:33:04.253 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:33:04.253 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:33:04.253 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:33:04.253 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:33:04.253 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:33:04.253 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:04.253 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:33:04.253 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:33:04.253 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:04.253 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:04.253 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:33:04.253 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:33:04.253 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:04.253 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:04.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:04.254 --rc genhtml_branch_coverage=1 00:33:04.254 --rc genhtml_function_coverage=1 00:33:04.254 --rc genhtml_legend=1 00:33:04.254 --rc geninfo_all_blocks=1 00:33:04.254 --rc geninfo_unexecuted_blocks=1 00:33:04.254 00:33:04.254 ' 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:04.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:04.254 --rc genhtml_branch_coverage=1 00:33:04.254 --rc genhtml_function_coverage=1 00:33:04.254 --rc genhtml_legend=1 00:33:04.254 --rc geninfo_all_blocks=1 00:33:04.254 --rc geninfo_unexecuted_blocks=1 00:33:04.254 00:33:04.254 ' 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:04.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:04.254 --rc genhtml_branch_coverage=1 00:33:04.254 --rc genhtml_function_coverage=1 00:33:04.254 --rc genhtml_legend=1 00:33:04.254 --rc geninfo_all_blocks=1 00:33:04.254 --rc geninfo_unexecuted_blocks=1 00:33:04.254 00:33:04.254 ' 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:04.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:04.254 --rc genhtml_branch_coverage=1 00:33:04.254 --rc genhtml_function_coverage=1 00:33:04.254 --rc genhtml_legend=1 00:33:04.254 --rc geninfo_all_blocks=1 00:33:04.254 --rc 
geninfo_unexecuted_blocks=1 00:33:04.254 00:33:04.254 ' 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:33:04.254 19:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:12.400 19:23:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:12.400 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:12.400 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:33:12.400 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:12.400 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:12.400 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:12.401 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:12.401 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:12.401 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:12.401 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:12.401 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:12.401 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:12.401 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:12.401 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:12.401 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:12.401 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:12.401 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:12.401 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:12.401 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:12.401 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:12.401 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:33:12.401 00:33:12.401 --- 10.0.0.2 ping statistics --- 00:33:12.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:12.401 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:33:12.401 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:12.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:12.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:33:12.401 00:33:12.401 --- 10.0.0.1 ping statistics --- 00:33:12.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:12.401 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:33:12.401 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:12.401 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:33:12.401 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:12.401 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:12.401 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:12.401 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:12.401 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:12.401 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:12.401 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:12.401 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:33:12.401 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:12.401 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:12.401 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:12.401 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3190497 00:33:12.401 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3190497 00:33:12.401 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:12.401 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3190497 ']' 00:33:12.401 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:12.401 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:12.401 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:12.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:12.401 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:12.401 19:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:12.401 [2024-11-26 19:23:28.782402] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:12.401 [2024-11-26 19:23:28.783558] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:33:12.401 [2024-11-26 19:23:28.783612] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:12.401 [2024-11-26 19:23:28.886055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:12.401 [2024-11-26 19:23:28.936566] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:12.401 [2024-11-26 19:23:28.936621] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:12.401 [2024-11-26 19:23:28.936629] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:12.401 [2024-11-26 19:23:28.936636] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:12.401 [2024-11-26 19:23:28.936643] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:12.401 [2024-11-26 19:23:28.937409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:12.401 [2024-11-26 19:23:29.017003] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:12.401 [2024-11-26 19:23:29.017306] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:12.662 19:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:12.662 19:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:33:12.662 19:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:12.662 19:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:12.662 19:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:12.662 19:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:12.662 19:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:12.662 19:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.662 19:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:12.662 [2024-11-26 19:23:29.670276] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:12.662 19:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.662 19:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:12.662 19:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.662 19:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:12.662 Malloc0 00:33:12.662 19:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.662 19:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:12.662 19:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.662 19:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:12.662 19:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.662 19:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:12.662 19:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.662 19:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:12.662 19:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.662 19:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:12.662 19:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:33:12.662 19:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:12.662 [2024-11-26 19:23:29.746347] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:12.662 19:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.662 19:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3190824 00:33:12.662 19:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:33:12.662 19:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:12.662 19:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3190824 /var/tmp/bdevperf.sock 00:33:12.662 19:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3190824 ']' 00:33:12.662 19:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:12.662 19:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:12.662 19:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:12.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:12.662 19:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:12.662 19:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:12.662 [2024-11-26 19:23:29.805068] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
00:33:12.662 [2024-11-26 19:23:29.805134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3190824 ] 00:33:12.922 [2024-11-26 19:23:29.896380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:12.922 [2024-11-26 19:23:29.949267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:13.495 19:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:13.495 19:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:33:13.495 19:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:13.495 19:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.495 19:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:13.757 NVMe0n1 00:33:13.757 19:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.758 19:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:13.758 Running I/O for 10 seconds... 00:33:15.649 8206.00 IOPS, 32.05 MiB/s [2024-11-26T18:23:34.246Z] 8704.00 IOPS, 34.00 MiB/s [2024-11-26T18:23:35.186Z] 9218.33 IOPS, 36.01 MiB/s [2024-11-26T18:23:36.127Z] 10157.75 IOPS, 39.68 MiB/s [2024-11-26T18:23:37.067Z] 10752.60 IOPS, 42.00 MiB/s [2024-11-26T18:23:38.010Z] 11227.50 IOPS, 43.86 MiB/s [2024-11-26T18:23:38.953Z] 11559.29 IOPS, 45.15 MiB/s [2024-11-26T18:23:39.895Z] 11786.12 IOPS, 46.04 MiB/s [2024-11-26T18:23:41.281Z] 11990.67 IOPS, 46.84 MiB/s [2024-11-26T18:23:41.281Z] 12176.20 IOPS, 47.56 MiB/s 00:33:24.068 Latency(us) 00:33:24.068 [2024-11-26T18:23:41.281Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.068 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:33:24.068 Verification LBA range: start 0x0 length 0x4000 00:33:24.068 NVMe0n1 : 10.06 12203.87 47.67 0.00 0.00 83606.18 24466.77 73837.23 00:33:24.068 [2024-11-26T18:23:41.281Z] =================================================================================================================== 00:33:24.068 [2024-11-26T18:23:41.281Z] Total : 12203.87 47.67 0.00 0.00 83606.18 24466.77 73837.23 00:33:24.068 { 00:33:24.068 "results": [ 00:33:24.068 { 00:33:24.068 "job": "NVMe0n1", 00:33:24.068 "core_mask": "0x1", 00:33:24.069 "workload": "verify", 00:33:24.069 "status": "finished", 00:33:24.069 "verify_range": { 00:33:24.069 "start": 0, 00:33:24.069 "length": 16384 00:33:24.069 }, 00:33:24.069 "queue_depth": 1024, 00:33:24.069 "io_size": 4096, 00:33:24.069 "runtime": 10.061237, 00:33:24.069 "iops": 12203.867178558661, 00:33:24.069 "mibps": 47.67135616624477, 00:33:24.069 "io_failed": 0, 00:33:24.069 "io_timeout": 0, 00:33:24.069 "avg_latency_us": 83606.18410915468, 00:33:24.069 "min_latency_us": 24466.773333333334, 00:33:24.069 "max_latency_us": 73837.22666666667 00:33:24.069 } 
00:33:24.069 ], 00:33:24.069 "core_count": 1 00:33:24.069 } 00:33:24.069 19:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3190824 00:33:24.069 19:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3190824 ']' 00:33:24.069 19:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3190824 00:33:24.069 19:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:33:24.069 19:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:24.069 19:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3190824 00:33:24.069 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:24.069 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:24.069 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3190824' 00:33:24.069 killing process with pid 3190824 00:33:24.069 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3190824 00:33:24.069 Received shutdown signal, test time was about 10.000000 seconds 00:33:24.069 00:33:24.069 Latency(us) 00:33:24.069 [2024-11-26T18:23:41.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.069 [2024-11-26T18:23:41.282Z] =================================================================================================================== 00:33:24.069 [2024-11-26T18:23:41.282Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:24.069 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3190824 00:33:24.069 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:33:24.069 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:33:24.069 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:24.069 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:33:24.069 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:24.069 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:33:24.069 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:24.069 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:24.069 rmmod nvme_tcp 00:33:24.069 rmmod nvme_fabrics 00:33:24.069 rmmod nvme_keyring 00:33:24.069 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:24.069 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:33:24.069 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 
00:33:24.069 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3190497 ']' 00:33:24.069 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3190497 00:33:24.069 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3190497 ']' 00:33:24.069 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3190497 00:33:24.069 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:33:24.069 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:24.069 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3190497 00:33:24.069 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:24.069 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:24.069 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3190497' 00:33:24.069 killing process with pid 3190497 00:33:24.069 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3190497 00:33:24.069 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3190497 00:33:24.330 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:24.330 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:24.330 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:24.330 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:33:24.330 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:33:24.330 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:33:24.330 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:24.330 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:24.330 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:24.330 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:24.330 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:24.330 19:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:26.874 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:26.874 00:33:26.874 real 0m22.500s 00:33:26.874 user 0m24.518s 00:33:26.874 sys 0m7.673s 00:33:26.874 19:23:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:26.874 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:26.874 ************************************ 00:33:26.874 END TEST nvmf_queue_depth 00:33:26.874 ************************************ 00:33:26.874 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:26.874 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:26.874 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:26.874 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:26.874 ************************************ 00:33:26.874 START TEST nvmf_target_multipath 00:33:26.874 ************************************ 00:33:26.874 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:26.874 * Looking for test storage... 00:33:26.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:26.874 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:26.874 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:33:26.874 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:26.874 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:26.874 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:26.874 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:26.874 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:26.874 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:33:26.874 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:33:26.874 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:33:26.874 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:33:26.874 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:26.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:26.875 --rc genhtml_branch_coverage=1 00:33:26.875 --rc genhtml_function_coverage=1 00:33:26.875 --rc genhtml_legend=1 00:33:26.875 --rc geninfo_all_blocks=1 00:33:26.875 --rc geninfo_unexecuted_blocks=1 00:33:26.875 00:33:26.875 ' 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:26.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:26.875 --rc genhtml_branch_coverage=1 00:33:26.875 --rc genhtml_function_coverage=1 00:33:26.875 --rc genhtml_legend=1 00:33:26.875 --rc geninfo_all_blocks=1 00:33:26.875 --rc geninfo_unexecuted_blocks=1 00:33:26.875 00:33:26.875 ' 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:26.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:26.875 --rc genhtml_branch_coverage=1 00:33:26.875 --rc genhtml_function_coverage=1 00:33:26.875 --rc genhtml_legend=1 
00:33:26.875 --rc geninfo_all_blocks=1 00:33:26.875 --rc geninfo_unexecuted_blocks=1 00:33:26.875 00:33:26.875 ' 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:26.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:26.875 --rc genhtml_branch_coverage=1 00:33:26.875 --rc genhtml_function_coverage=1 00:33:26.875 --rc genhtml_legend=1 00:33:26.875 --rc geninfo_all_blocks=1 00:33:26.875 --rc geninfo_unexecuted_blocks=1 00:33:26.875 00:33:26.875 ' 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:26.875 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:26.876 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:33:26.876 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:26.876 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:26.876 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:26.876 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:26.876 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:26.876 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:26.876 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:26.876 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:26.876 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:26.876 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:26.876 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:33:26.876 19:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:35.016 19:23:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:35.016 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:35.016 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:35.016 19:23:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:35.016 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:35.016 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:35.016 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:35.017 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:35.017 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:35.017 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:35.017 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:35.017 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:35.017 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:35.017 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:35.017 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:35.017 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:35.017 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:35.017 19:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:35.017 19:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:35.017 19:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:35.017 19:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:35.017 19:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:35.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:35.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:33:35.017 00:33:35.017 --- 10.0.0.2 ping statistics --- 00:33:35.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:35.017 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:33:35.017 19:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:35.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:35.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:33:35.017 00:33:35.017 --- 10.0.0.1 ping statistics --- 00:33:35.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:35.017 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:33:35.017 19:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:35.017 19:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:33:35.017 19:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:35.017 19:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:35.017 19:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:35.017 19:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:35.017 19:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:35.017 19:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:35.017 19:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:35.017 19:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:33:35.017 19:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:33:35.017 only one NIC for nvmf test 00:33:35.017 19:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:33:35.017 19:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:35.017 19:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:35.017 19:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:35.017 19:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:35.017 19:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:35.017 19:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:35.017 rmmod nvme_tcp 00:33:35.017 rmmod nvme_fabrics 00:33:35.017 rmmod nvme_keyring 00:33:35.017 19:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:35.017 19:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:35.017 19:23:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:35.017 19:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:35.017 19:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:35.017 19:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:35.017 19:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:35.017 19:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:35.017 19:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:35.017 19:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:35.017 19:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:35.017 19:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:35.017 19:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:35.017 19:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:35.017 19:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:35.017 19:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:36.404 19:23:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:36.404 00:33:36.404 real 0m9.791s 00:33:36.404 user 0m2.167s 00:33:36.404 sys 0m5.565s 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:36.404 ************************************ 00:33:36.404 END TEST nvmf_target_multipath 00:33:36.404 ************************************ 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:36.404 ************************************ 00:33:36.404 START TEST nvmf_zcopy 00:33:36.404 ************************************ 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:36.404 * Looking for test storage... 
00:33:36.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:33:36.404 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:36.667 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:33:36.667 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:33:36.667 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:36.667 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:36.667 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:33:36.667 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:36.667 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:36.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:36.667 --rc genhtml_branch_coverage=1 00:33:36.667 --rc genhtml_function_coverage=1 00:33:36.667 --rc genhtml_legend=1 00:33:36.667 --rc geninfo_all_blocks=1 00:33:36.667 --rc geninfo_unexecuted_blocks=1 00:33:36.667 00:33:36.667 ' 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:36.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:36.668 --rc genhtml_branch_coverage=1 00:33:36.668 --rc genhtml_function_coverage=1 00:33:36.668 --rc genhtml_legend=1 00:33:36.668 --rc geninfo_all_blocks=1 00:33:36.668 --rc geninfo_unexecuted_blocks=1 00:33:36.668 00:33:36.668 ' 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:36.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:36.668 --rc genhtml_branch_coverage=1 00:33:36.668 --rc genhtml_function_coverage=1 00:33:36.668 --rc genhtml_legend=1 00:33:36.668 --rc geninfo_all_blocks=1 00:33:36.668 --rc geninfo_unexecuted_blocks=1 00:33:36.668 00:33:36.668 ' 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:36.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:36.668 --rc genhtml_branch_coverage=1 00:33:36.668 --rc genhtml_function_coverage=1 00:33:36.668 --rc genhtml_legend=1 00:33:36.668 --rc geninfo_all_blocks=1 00:33:36.668 --rc geninfo_unexecuted_blocks=1 00:33:36.668 00:33:36.668 ' 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:36.668 19:23:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:33:36.668 19:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:44.812 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:44.812 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:33:44.812 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:44.812 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:44.812 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:44.812 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:44.812 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:44.812 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:33:44.813 19:24:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:44.813 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:44.813 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:44.813 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:44.813 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:44.813 19:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:44.813 19:24:00 
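
[nvmf_tcp_init above splits the two E810 ports across network namespaces so that target and initiator traffic crosses a real link: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given the target IP 10.0.0.2, while cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1. The same plumbing, condensed to plain iproute2 plus the firewall rule and reachability checks that follow in the log:]

ip netns add cvl_0_0_ns_spdk                          # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator IP, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                    # default ns -> namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # namespaced target -> initiator
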
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:44.813 19:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:44.813 19:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:44.813 19:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:44.813 19:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:44.813 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:44.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.812 ms 00:33:44.813 00:33:44.813 --- 10.0.0.2 ping statistics --- 00:33:44.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:44.813 rtt min/avg/max/mdev = 0.812/0.812/0.812/0.000 ms 00:33:44.813 19:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:44.813 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:44.813 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:33:44.813 00:33:44.813 --- 10.0.0.1 ping statistics --- 00:33:44.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:44.813 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:33:44.813 19:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:44.813 19:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:33:44.813 19:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:44.814 19:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:44.814 19:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:44.814 19:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:44.814 19:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:44.814 19:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:44.814 19:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:44.814 19:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:33:44.814 19:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:44.814 19:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:44.814 19:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:44.814 19:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3201226 00:33:44.814 19:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3201226 00:33:44.814 19:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:44.814 19:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3201226 ']' 00:33:44.814 19:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:44.814 19:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:44.814 19:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:44.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:44.814 19:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:44.814 19:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:44.814 [2024-11-26 19:24:01.199563] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:44.814 [2024-11-26 19:24:01.200684] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:33:44.814 [2024-11-26 19:24:01.200732] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:44.814 [2024-11-26 19:24:01.305172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:44.814 [2024-11-26 19:24:01.358481] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:44.814 [2024-11-26 19:24:01.358532] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:44.814 [2024-11-26 19:24:01.358541] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:44.814 [2024-11-26 19:24:01.358548] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:44.814 [2024-11-26 19:24:01.358555] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:44.814 [2024-11-26 19:24:01.359304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:44.814 [2024-11-26 19:24:01.439819] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:44.814 [2024-11-26 19:24:01.440126] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
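
[nvmfappstart launches the target inside that namespace with interrupt mode enabled and waits for its RPC socket; the notices above confirm a single reactor on core 1 (-m 0x2) and the spdk_thread objects switching to interrupt mode. A minimal launch-and-wait sketch, assuming the default RPC socket /var/tmp/spdk.sock (UNIX-domain sockets live in the filesystem, so rpc.py can reach it from the default network namespace):]

ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!
# Roughly what the waitforlisten helper does: poll until an RPC call succeeds.
until ./scripts/rpc.py -t 1 rpc_get_methods &> /dev/null; do
    sleep 0.1
done
echo "nvmf_tgt is up as pid $nvmfpid"
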
00:33:45.075 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:45.075 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:33:45.075 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:45.075 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:45.075 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:45.075 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:45.075 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:33:45.075 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:33:45.075 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.075 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:45.075 [2024-11-26 19:24:02.088223] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:45.075 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.075 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:45.075 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.075 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:45.075 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.075 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:45.075 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.075 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:45.075 [2024-11-26 19:24:02.116526] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:45.075 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.075 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:45.075 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.075 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:45.075 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.075 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:33:45.075 19:24:02 
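
[The rpc_cmd calls above and just below provision the target: a tcp transport with zero-copy enabled, subsystem cnode1, a data listener and a discovery listener on 10.0.0.2:4420, and a malloc bdev that becomes namespace 1. rpc_cmd is effectively a wrapper around scripts/rpc.py, so the sequence restates as direct calls, with the flags copied from the log:]

./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy   # --zcopy is the feature under test
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10                              # any host, up to 10 namespaces
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0          # 32 MiB bdev, 4 KiB blocks
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
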
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.075 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:45.076 malloc0 00:33:45.076 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.076 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:33:45.076 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.076 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:45.076 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.076 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:33:45.076 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:33:45.076 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:33:45.076 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:45.076 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:45.076 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:45.076 { 00:33:45.076 "params": { 00:33:45.076 "name": "Nvme$subsystem", 00:33:45.076 "trtype": "$TEST_TRANSPORT", 00:33:45.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:45.076 "adrfam": "ipv4", 00:33:45.076 "trsvcid": "$NVMF_PORT", 00:33:45.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:45.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:45.076 "hdgst": ${hdgst:-false}, 00:33:45.076 "ddgst": ${ddgst:-false} 00:33:45.076 }, 00:33:45.076 "method": "bdev_nvme_attach_controller" 00:33:45.076 } 00:33:45.076 EOF 00:33:45.076 )") 00:33:45.076 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:45.076 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:33:45.076 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:45.076 19:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:45.076 "params": { 00:33:45.076 "name": "Nvme1", 00:33:45.076 "trtype": "tcp", 00:33:45.076 "traddr": "10.0.0.2", 00:33:45.076 "adrfam": "ipv4", 00:33:45.076 "trsvcid": "4420", 00:33:45.076 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:45.076 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:45.076 "hdgst": false, 00:33:45.076 "ddgst": false 00:33:45.076 }, 00:33:45.076 "method": "bdev_nvme_attach_controller" 00:33:45.076 }' 00:33:45.076 [2024-11-26 19:24:02.220432] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
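
[bdevperf takes its bdev configuration from --json /dev/fd/62, a bash process substitution over gen_nvmf_target_json, which embeds the bdev_nvme_attach_controller fragment printed above into a bdev-subsystem config. An equivalent standalone run could look like the sketch below; the "subsystems" envelope is an assumption about what gen_nvmf_target_json wraps around the fragment, while the params are copied from the log:]

cat > /tmp/nvmf_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# 10-second verify workload, queue depth 128, 8 KiB I/O, against the attached controller.
./build/examples/bdevperf --json /tmp/nvmf_bdev.json -t 10 -q 128 -w verify -o 8192
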
00:33:45.076 [2024-11-26 19:24:02.220496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3201597 ] 00:33:45.337 [2024-11-26 19:24:02.302135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:45.337 [2024-11-26 19:24:02.355651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:45.598 Running I/O for 10 seconds... 00:33:47.489 6005.00 IOPS, 46.91 MiB/s [2024-11-26T18:24:06.086Z] 6134.00 IOPS, 47.92 MiB/s [2024-11-26T18:24:06.658Z] 6302.67 IOPS, 49.24 MiB/s [2024-11-26T18:24:08.043Z] 6401.50 IOPS, 50.01 MiB/s [2024-11-26T18:24:08.984Z] 6459.80 IOPS, 50.47 MiB/s [2024-11-26T18:24:09.923Z] 6794.33 IOPS, 53.08 MiB/s [2024-11-26T18:24:10.863Z] 7202.14 IOPS, 56.27 MiB/s [2024-11-26T18:24:11.805Z] 7509.50 IOPS, 58.67 MiB/s [2024-11-26T18:24:12.749Z] 7746.44 IOPS, 60.52 MiB/s [2024-11-26T18:24:12.749Z] 7936.90 IOPS, 62.01 MiB/s 00:33:55.536 Latency(us) 00:33:55.536 [2024-11-26T18:24:12.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:55.536 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:33:55.536 Verification LBA range: start 0x0 length 0x1000 00:33:55.536 Nvme1n1 : 10.01 7941.40 62.04 0.00 0.00 16076.39 2307.41 29272.75 00:33:55.536 [2024-11-26T18:24:12.749Z] =================================================================================================================== 00:33:55.536 [2024-11-26T18:24:12.749Z] Total : 7941.40 62.04 0.00 0.00 16076.39 2307.41 29272.75 00:33:55.798 19:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3204183 00:33:55.798 19:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:33:55.798 19:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:55.798 19:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:33:55.798 19:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:33:55.798 19:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:33:55.798 19:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:55.798 19:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:55.798 19:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:55.798 { 00:33:55.798 "params": { 00:33:55.798 "name": "Nvme$subsystem", 00:33:55.798 "trtype": "$TEST_TRANSPORT", 00:33:55.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:55.798 "adrfam": "ipv4", 00:33:55.798 "trsvcid": "$NVMF_PORT", 00:33:55.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:55.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:55.798 "hdgst": ${hdgst:-false}, 00:33:55.798 "ddgst": ${ddgst:-false} 00:33:55.798 }, 00:33:55.799 "method": "bdev_nvme_attach_controller" 00:33:55.799 } 00:33:55.799 EOF 00:33:55.799 )") 00:33:55.799 [2024-11-26 19:24:12.783717] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:33:55.799 [2024-11-26 19:24:12.783748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:55.799 19:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:55.799 19:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:33:55.799 19:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:55.799 19:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:55.799 "params": { 00:33:55.799 "name": "Nvme1", 00:33:55.799 "trtype": "tcp", 00:33:55.799 "traddr": "10.0.0.2", 00:33:55.799 "adrfam": "ipv4", 00:33:55.799 "trsvcid": "4420", 00:33:55.799 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:55.799 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:55.799 "hdgst": false, 00:33:55.799 "ddgst": false 00:33:55.799 }, 00:33:55.799 "method": "bdev_nvme_attach_controller" 00:33:55.799 }' 00:33:55.799 [2024-11-26 19:24:12.795679] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:55.799 [2024-11-26 19:24:12.795688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:55.799 [2024-11-26 19:24:12.807677] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:55.799 [2024-11-26 19:24:12.807685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:55.799 [2024-11-26 19:24:12.819677] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:55.799 [2024-11-26 19:24:12.819685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:55.799 [2024-11-26 19:24:12.831018] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
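
[The *ERROR* pairs that begin here are expected rather than a test failure: while the second bdevperf job (file-prefix spdk_pid3204183, a 5-second 50/50 randrw run per the -t 5 -w randrw -M 50 flags above) is brought up, the test keeps re-issuing nvmf_subsystem_add_ns for the NSID that malloc0 already occupies, apparently a retry loop in target/zcopy.sh that exercises the RPC failure path while I/O is in flight. The collision reproduces in isolation, assuming malloc0 is already attached as NSID 1 as provisioned earlier:]

# A second add of the same NSID is rejected by the subsystem:
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# -> subsystem.c: "Requested NSID 1 already in use"
#    nvmf_rpc.c:  "Unable to add namespace"
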
00:33:55.799 [2024-11-26 19:24:12.831066] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3204183 ]
00:33:55.800 [2024-11-26 19:24:12.913783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:55.800 [2024-11-26 19:24:12.943322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:56.063 Running I/O for 5 seconds...
[Roughly 200 repetitions of the record pair "subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use" / "nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace", emitted about every 12 ms between 19:24:12.831 and 19:24:15.302, are elided here; the non-repeating records they were interleaved with are kept:]
00:33:57.123 18973.00 IOPS, 148.23 MiB/s [2024-11-26T18:24:14.336Z]
00:33:57.988 19031.50 IOPS, 148.68 MiB/s [2024-11-26T18:24:15.201Z]
00:33:58.260 [2024-11-26 19:24:15.302816] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:58.260 [2024-11-26 19:24:15.302831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.260 [2024-11-26 19:24:15.315661] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.260 [2024-11-26 19:24:15.315676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.260 [2024-11-26 19:24:15.328445] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.260 [2024-11-26 19:24:15.328460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.260 [2024-11-26 19:24:15.342903] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.260 [2024-11-26 19:24:15.342918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.260 [2024-11-26 19:24:15.356264] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.260 [2024-11-26 19:24:15.356278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.260 [2024-11-26 19:24:15.370666] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.260 [2024-11-26 19:24:15.370681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.260 [2024-11-26 19:24:15.383538] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.260 [2024-11-26 19:24:15.383553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.260 [2024-11-26 19:24:15.396538] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.260 [2024-11-26 19:24:15.396552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.260 [2024-11-26 19:24:15.410757] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.260 [2024-11-26 19:24:15.410772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.260 [2024-11-26 19:24:15.424072] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.260 [2024-11-26 19:24:15.424086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.260 [2024-11-26 19:24:15.438620] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.260 [2024-11-26 19:24:15.438635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.260 [2024-11-26 19:24:15.451380] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.260 [2024-11-26 19:24:15.451394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.530 [2024-11-26 19:24:15.464223] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.530 [2024-11-26 19:24:15.464237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.530 [2024-11-26 19:24:15.478888] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.530 [2024-11-26 19:24:15.478903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.530 [2024-11-26 19:24:15.491966] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.530 [2024-11-26 19:24:15.491980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.530 [2024-11-26 19:24:15.506825] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.530 [2024-11-26 19:24:15.506840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.530 [2024-11-26 19:24:15.519964] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.530 [2024-11-26 19:24:15.519978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.530 [2024-11-26 19:24:15.534424] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.530 [2024-11-26 19:24:15.534439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.530 [2024-11-26 19:24:15.547778] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.530 [2024-11-26 19:24:15.547797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.530 [2024-11-26 19:24:15.560991] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.530 [2024-11-26 19:24:15.561006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.530 [2024-11-26 19:24:15.575001] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.530 [2024-11-26 19:24:15.575016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.530 [2024-11-26 19:24:15.588091] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.530 [2024-11-26 19:24:15.588105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.530 [2024-11-26 19:24:15.602812] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.530 [2024-11-26 19:24:15.602827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.530 [2024-11-26 19:24:15.616023] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.530 [2024-11-26 19:24:15.616037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.530 [2024-11-26 19:24:15.630836] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.530 [2024-11-26 19:24:15.630850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.530 [2024-11-26 19:24:15.644135] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.530 [2024-11-26 19:24:15.644149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.530 [2024-11-26 19:24:15.658443] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.530 [2024-11-26 19:24:15.658457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.530 [2024-11-26 19:24:15.671413] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.530 [2024-11-26 19:24:15.671427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.530 [2024-11-26 19:24:15.684337] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.530 [2024-11-26 19:24:15.684351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.530 [2024-11-26 19:24:15.698603] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.530 [2024-11-26 19:24:15.698617] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.530 [2024-11-26 19:24:15.711992] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.530 [2024-11-26 19:24:15.712006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.530 [2024-11-26 19:24:15.727290] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.530 [2024-11-26 19:24:15.727305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.792 [2024-11-26 19:24:15.740493] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.792 [2024-11-26 19:24:15.740508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.792 [2024-11-26 19:24:15.755061] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.792 [2024-11-26 19:24:15.755076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.792 [2024-11-26 19:24:15.768296] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.792 [2024-11-26 19:24:15.768310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.792 [2024-11-26 19:24:15.783078] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.792 [2024-11-26 19:24:15.783092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.792 [2024-11-26 19:24:15.795978] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.792 [2024-11-26 19:24:15.795992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.792 [2024-11-26 19:24:15.810228] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.792 [2024-11-26 19:24:15.810246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.792 [2024-11-26 19:24:15.823445] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.792 [2024-11-26 19:24:15.823460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.792 [2024-11-26 19:24:15.836330] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.792 [2024-11-26 19:24:15.836344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.792 [2024-11-26 19:24:15.850948] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.792 [2024-11-26 19:24:15.850962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.792 [2024-11-26 19:24:15.864120] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.792 [2024-11-26 19:24:15.864134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.792 [2024-11-26 19:24:15.878833] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.792 [2024-11-26 19:24:15.878848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.792 [2024-11-26 19:24:15.892000] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.792 [2024-11-26 19:24:15.892014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.792 [2024-11-26 19:24:15.907015] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.792 [2024-11-26 19:24:15.907029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.792 [2024-11-26 19:24:15.920245] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.792 [2024-11-26 19:24:15.920260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.792 [2024-11-26 19:24:15.934600] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.792 [2024-11-26 19:24:15.934615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.792 [2024-11-26 19:24:15.947713] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.792 [2024-11-26 19:24:15.947729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.792 [2024-11-26 19:24:15.961076] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.792 [2024-11-26 19:24:15.961091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.792 [2024-11-26 19:24:15.975133] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.792 [2024-11-26 19:24:15.975148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.792 [2024-11-26 19:24:15.987773] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.792 [2024-11-26 19:24:15.987788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.792 [2024-11-26 19:24:16.000486] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.792 [2024-11-26 19:24:16.000501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.053 [2024-11-26 19:24:16.014837] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.053 [2024-11-26 19:24:16.014852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.053 [2024-11-26 19:24:16.028181] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.053 [2024-11-26 19:24:16.028195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.053 [2024-11-26 19:24:16.042851] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.053 [2024-11-26 19:24:16.042867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.053 [2024-11-26 19:24:16.055682] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.053 [2024-11-26 19:24:16.055696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.053 [2024-11-26 19:24:16.068456] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.054 [2024-11-26 19:24:16.068474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.054 [2024-11-26 19:24:16.082585] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.054 [2024-11-26 19:24:16.082600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.054 [2024-11-26 19:24:16.095229] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.054 [2024-11-26 19:24:16.095245] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.054 [2024-11-26 19:24:16.108659] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.054 [2024-11-26 19:24:16.108673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.054 [2024-11-26 19:24:16.122816] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.054 [2024-11-26 19:24:16.122831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.054 19006.67 IOPS, 148.49 MiB/s [2024-11-26T18:24:16.267Z] [2024-11-26 19:24:16.135678] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.054 [2024-11-26 19:24:16.135693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.054 [2024-11-26 19:24:16.148756] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.054 [2024-11-26 19:24:16.148770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.054 [2024-11-26 19:24:16.162766] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.054 [2024-11-26 19:24:16.162781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.054 [2024-11-26 19:24:16.175899] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.054 [2024-11-26 19:24:16.175914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.054 [2024-11-26 19:24:16.188561] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.054 [2024-11-26 19:24:16.188575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.054 [2024-11-26 19:24:16.202963] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.054 [2024-11-26 19:24:16.202978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.054 [2024-11-26 19:24:16.216203] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.054 [2024-11-26 19:24:16.216218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.054 [2024-11-26 19:24:16.230761] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.054 [2024-11-26 19:24:16.230776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.054 [2024-11-26 19:24:16.243725] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.054 [2024-11-26 19:24:16.243739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.054 [2024-11-26 19:24:16.256426] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.054 [2024-11-26 19:24:16.256440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.316 [2024-11-26 19:24:16.270871] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.316 [2024-11-26 19:24:16.270886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.316 [2024-11-26 19:24:16.283740] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.316 [2024-11-26 19:24:16.283755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.316 [2024-11-26 
19:24:16.296314] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.316 [2024-11-26 19:24:16.296328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.316 [2024-11-26 19:24:16.310772] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.316 [2024-11-26 19:24:16.310787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.316 [2024-11-26 19:24:16.324125] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.316 [2024-11-26 19:24:16.324140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.316 [2024-11-26 19:24:16.338748] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.316 [2024-11-26 19:24:16.338762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.316 [2024-11-26 19:24:16.351866] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.316 [2024-11-26 19:24:16.351881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.316 [2024-11-26 19:24:16.364530] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.316 [2024-11-26 19:24:16.364545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.316 [2024-11-26 19:24:16.379678] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.316 [2024-11-26 19:24:16.379693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.316 [2024-11-26 19:24:16.392487] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.316 [2024-11-26 19:24:16.392502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.316 [2024-11-26 19:24:16.407331] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.316 [2024-11-26 19:24:16.407346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.316 [2024-11-26 19:24:16.420818] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.316 [2024-11-26 19:24:16.420833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.316 [2024-11-26 19:24:16.435029] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.316 [2024-11-26 19:24:16.435044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.316 [2024-11-26 19:24:16.447950] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.316 [2024-11-26 19:24:16.447964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.316 [2024-11-26 19:24:16.463310] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.316 [2024-11-26 19:24:16.463325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.316 [2024-11-26 19:24:16.476517] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.316 [2024-11-26 19:24:16.476531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.316 [2024-11-26 19:24:16.491100] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.316 [2024-11-26 19:24:16.491114] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.316 [2024-11-26 19:24:16.504332] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.316 [2024-11-26 19:24:16.504347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.316 [2024-11-26 19:24:16.519251] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.316 [2024-11-26 19:24:16.519265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.578 [2024-11-26 19:24:16.532442] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.578 [2024-11-26 19:24:16.532457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.578 [2024-11-26 19:24:16.546803] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.578 [2024-11-26 19:24:16.546819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.578 [2024-11-26 19:24:16.559805] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.578 [2024-11-26 19:24:16.559820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.578 [2024-11-26 19:24:16.572646] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.578 [2024-11-26 19:24:16.572661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.578 [2024-11-26 19:24:16.587385] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.578 [2024-11-26 19:24:16.587400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.578 [2024-11-26 19:24:16.600805] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.578 [2024-11-26 19:24:16.600820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.578 [2024-11-26 19:24:16.614923] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.578 [2024-11-26 19:24:16.614937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.578 [2024-11-26 19:24:16.627986] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.578 [2024-11-26 19:24:16.628000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.578 [2024-11-26 19:24:16.643155] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.578 [2024-11-26 19:24:16.643175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.578 [2024-11-26 19:24:16.655582] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.578 [2024-11-26 19:24:16.655597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.578 [2024-11-26 19:24:16.669070] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.578 [2024-11-26 19:24:16.669083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.578 [2024-11-26 19:24:16.683246] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.578 [2024-11-26 19:24:16.683261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.578 [2024-11-26 19:24:16.696349] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.578 [2024-11-26 19:24:16.696364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.578 [2024-11-26 19:24:16.710779] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.578 [2024-11-26 19:24:16.710794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.578 [2024-11-26 19:24:16.723885] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.578 [2024-11-26 19:24:16.723899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.578 [2024-11-26 19:24:16.736903] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.578 [2024-11-26 19:24:16.736917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.578 [2024-11-26 19:24:16.751017] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.578 [2024-11-26 19:24:16.751031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.578 [2024-11-26 19:24:16.764038] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.578 [2024-11-26 19:24:16.764051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.578 [2024-11-26 19:24:16.779076] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.578 [2024-11-26 19:24:16.779091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.839 [2024-11-26 19:24:16.792102] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.839 [2024-11-26 19:24:16.792116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.839 [2024-11-26 19:24:16.807241] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.839 [2024-11-26 19:24:16.807256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.839 [2024-11-26 19:24:16.820271] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.839 [2024-11-26 19:24:16.820285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.839 [2024-11-26 19:24:16.835290] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.839 [2024-11-26 19:24:16.835308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.839 [2024-11-26 19:24:16.848139] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.839 [2024-11-26 19:24:16.848154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.839 [2024-11-26 19:24:16.862806] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.839 [2024-11-26 19:24:16.862821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.839 [2024-11-26 19:24:16.876103] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.839 [2024-11-26 19:24:16.876117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.839 [2024-11-26 19:24:16.890611] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.839 [2024-11-26 19:24:16.890625] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.839 [2024-11-26 19:24:16.904004] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.839 [2024-11-26 19:24:16.904017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.839 [2024-11-26 19:24:16.919001] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.839 [2024-11-26 19:24:16.919016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.839 [2024-11-26 19:24:16.932082] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.839 [2024-11-26 19:24:16.932097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.839 [2024-11-26 19:24:16.946958] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.839 [2024-11-26 19:24:16.946973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.839 [2024-11-26 19:24:16.959591] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.839 [2024-11-26 19:24:16.959606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.839 [2024-11-26 19:24:16.972378] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.839 [2024-11-26 19:24:16.972392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.839 [2024-11-26 19:24:16.987155] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.839 [2024-11-26 19:24:16.987173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.839 [2024-11-26 19:24:17.000095] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.839 [2024-11-26 19:24:17.000109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.839 [2024-11-26 19:24:17.014922] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.839 [2024-11-26 19:24:17.014937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.839 [2024-11-26 19:24:17.028284] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.839 [2024-11-26 19:24:17.028298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.839 [2024-11-26 19:24:17.042641] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.839 [2024-11-26 19:24:17.042656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.101 [2024-11-26 19:24:17.055644] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.101 [2024-11-26 19:24:17.055660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.101 [2024-11-26 19:24:17.068589] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.101 [2024-11-26 19:24:17.068603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.101 [2024-11-26 19:24:17.082921] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.101 [2024-11-26 19:24:17.082935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.101 [2024-11-26 19:24:17.095989] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.101 [2024-11-26 19:24:17.096006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.101 [2024-11-26 19:24:17.110712] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.101 [2024-11-26 19:24:17.110726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.101 [2024-11-26 19:24:17.123655] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.101 [2024-11-26 19:24:17.123669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.101 19036.75 IOPS, 148.72 MiB/s [2024-11-26T18:24:17.314Z] [2024-11-26 19:24:17.136696] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.101 [2024-11-26 19:24:17.136711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.101 [2024-11-26 19:24:17.150847] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.101 [2024-11-26 19:24:17.150862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.101 [2024-11-26 19:24:17.163991] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.101 [2024-11-26 19:24:17.164004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.101 [2024-11-26 19:24:17.178839] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.101 [2024-11-26 19:24:17.178853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.101 [2024-11-26 19:24:17.191625] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.101 [2024-11-26 19:24:17.191639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.101 [2024-11-26 19:24:17.204652] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.101 [2024-11-26 19:24:17.204666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.101 [2024-11-26 19:24:17.219256] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.101 [2024-11-26 19:24:17.219271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.101 [2024-11-26 19:24:17.232398] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.101 [2024-11-26 19:24:17.232412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.101 [2024-11-26 19:24:17.246725] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.101 [2024-11-26 19:24:17.246739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.101 [2024-11-26 19:24:17.259787] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.101 [2024-11-26 19:24:17.259801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.101 [2024-11-26 19:24:17.273372] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.101 [2024-11-26 19:24:17.273386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.101 [2024-11-26 19:24:17.287359] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:34:00.101 [2024-11-26 19:24:17.287374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.101 [2024-11-26 19:24:17.300104] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.101 [2024-11-26 19:24:17.300118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.362 [2024-11-26 19:24:17.314253] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.362 [2024-11-26 19:24:17.314268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.362 [2024-11-26 19:24:17.327426] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.362 [2024-11-26 19:24:17.327440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.362 [2024-11-26 19:24:17.340673] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.362 [2024-11-26 19:24:17.340687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.362 [2024-11-26 19:24:17.355512] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.362 [2024-11-26 19:24:17.355530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.362 [2024-11-26 19:24:17.368819] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.362 [2024-11-26 19:24:17.368833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.362 [2024-11-26 19:24:17.382685] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.362 [2024-11-26 19:24:17.382699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.362 [2024-11-26 19:24:17.395765] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.362 [2024-11-26 19:24:17.395780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.362 [2024-11-26 19:24:17.408909] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.362 [2024-11-26 19:24:17.408923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.362 [2024-11-26 19:24:17.423451] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.362 [2024-11-26 19:24:17.423466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.362 [2024-11-26 19:24:17.436534] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.362 [2024-11-26 19:24:17.436548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.362 [2024-11-26 19:24:17.450899] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.362 [2024-11-26 19:24:17.450914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.362 [2024-11-26 19:24:17.463871] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.362 [2024-11-26 19:24:17.463886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.362 [2024-11-26 19:24:17.476162] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.362 [2024-11-26 19:24:17.476176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.362 [2024-11-26 19:24:17.490851] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.362 [2024-11-26 19:24:17.490865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.362 [2024-11-26 19:24:17.503896] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.362 [2024-11-26 19:24:17.503911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.362 [2024-11-26 19:24:17.517121] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.362 [2024-11-26 19:24:17.517136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.362 [2024-11-26 19:24:17.531036] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.362 [2024-11-26 19:24:17.531050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.362 [2024-11-26 19:24:17.544335] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.362 [2024-11-26 19:24:17.544349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.362 [2024-11-26 19:24:17.558651] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.362 [2024-11-26 19:24:17.558665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.624 [2024-11-26 19:24:17.571879] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.624 [2024-11-26 19:24:17.571895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.624 [2024-11-26 19:24:17.584851] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.624 [2024-11-26 19:24:17.584865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.624 [2024-11-26 19:24:17.599200] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.624 [2024-11-26 19:24:17.599214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.624 [2024-11-26 19:24:17.612506] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.624 [2024-11-26 19:24:17.612520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.624 [2024-11-26 19:24:17.626624] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.624 [2024-11-26 19:24:17.626639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.624 [2024-11-26 19:24:17.639564] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.624 [2024-11-26 19:24:17.639578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.624 [2024-11-26 19:24:17.652973] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.624 [2024-11-26 19:24:17.652987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.624 [2024-11-26 19:24:17.667043] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.624 [2024-11-26 19:24:17.667058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.624 [2024-11-26 19:24:17.679952] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.624 [2024-11-26 19:24:17.679966] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.624 [2024-11-26 19:24:17.692873] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.624 [2024-11-26 19:24:17.692887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.624 [2024-11-26 19:24:17.706979] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.624 [2024-11-26 19:24:17.706994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.624 [2024-11-26 19:24:17.720154] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.624 [2024-11-26 19:24:17.720173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.624 [2024-11-26 19:24:17.734534] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.624 [2024-11-26 19:24:17.734550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.624 [2024-11-26 19:24:17.747501] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.624 [2024-11-26 19:24:17.747516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.624 [2024-11-26 19:24:17.760306] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.624 [2024-11-26 19:24:17.760321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.624 [2024-11-26 19:24:17.774878] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.624 [2024-11-26 19:24:17.774893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.624 [2024-11-26 19:24:17.788204] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.624 [2024-11-26 19:24:17.788219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.624 [2024-11-26 19:24:17.802909] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.624 [2024-11-26 19:24:17.802923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.624 [2024-11-26 19:24:17.816098] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.624 [2024-11-26 19:24:17.816112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.624 [2024-11-26 19:24:17.830590] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.624 [2024-11-26 19:24:17.830605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.885 [2024-11-26 19:24:17.843392] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.885 [2024-11-26 19:24:17.843408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.885 [2024-11-26 19:24:17.856551] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.885 [2024-11-26 19:24:17.856565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.885 [2024-11-26 19:24:17.871060] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.885 [2024-11-26 19:24:17.871076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.885 [2024-11-26 19:24:17.884090] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.885 [2024-11-26 19:24:17.884105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.885 [2024-11-26 19:24:17.898952] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.885 [2024-11-26 19:24:17.898966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.885 [2024-11-26 19:24:17.912037] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.885 [2024-11-26 19:24:17.912051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.885 [2024-11-26 19:24:17.926957] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.885 [2024-11-26 19:24:17.926971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.885 [2024-11-26 19:24:17.940325] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.885 [2024-11-26 19:24:17.940339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.885 [2024-11-26 19:24:17.955184] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.885 [2024-11-26 19:24:17.955199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.885 [2024-11-26 19:24:17.968511] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.885 [2024-11-26 19:24:17.968526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.885 [2024-11-26 19:24:17.983258] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.885 [2024-11-26 19:24:17.983273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.885 [2024-11-26 19:24:17.996506] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.885 [2024-11-26 19:24:17.996521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.885 [2024-11-26 19:24:18.011078] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.885 [2024-11-26 19:24:18.011093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.885 [2024-11-26 19:24:18.024141] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.885 [2024-11-26 19:24:18.024156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.885 [2024-11-26 19:24:18.038810] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.885 [2024-11-26 19:24:18.038824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.885 [2024-11-26 19:24:18.051857] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.885 [2024-11-26 19:24:18.051872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.885 [2024-11-26 19:24:18.064806] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.885 [2024-11-26 19:24:18.064821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.885 [2024-11-26 19:24:18.079118] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:00.885 [2024-11-26 19:24:18.079133] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:00.885 [2024-11-26 19:24:18.092382] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:00.885 [2024-11-26 19:24:18.092397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:01.146 [2024-11-26 19:24:18.106839] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:01.146 [2024-11-26 19:24:18.106854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:01.146 [2024-11-26 19:24:18.119945] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:01.146 [2024-11-26 19:24:18.119961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:01.146 19036.80 IOPS, 148.72 MiB/s [2024-11-26T18:24:18.359Z] [2024-11-26 19:24:18.132675] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:01.146 [2024-11-26 19:24:18.132691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:01.146
00:34:01.146 Latency(us)
00:34:01.146 [2024-11-26T18:24:18.359Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:34:01.146 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:34:01.146 Nvme1n1                     :       5.01   19038.55     148.74       0.00     0.00    6717.17    2375.68   11250.35
00:34:01.146 [2024-11-26T18:24:18.359Z] ===================================================================================================================
00:34:01.146 [2024-11-26T18:24:18.359Z] Total                       :   19038.55     148.74       0.00     0.00    6717.17    2375.68   11250.35
00:34:01.146 [2024-11-26 19:24:18.143682] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:01.146 [2024-11-26 19:24:18.143696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:01.146 [2024-11-26 19:24:18.155685] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:01.146 [2024-11-26 19:24:18.155699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:01.146 [2024-11-26 19:24:18.167684] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:01.146 [2024-11-26 19:24:18.167697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:01.146 [2024-11-26 19:24:18.179682] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:01.146 [2024-11-26 19:24:18.179696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:01.146 [2024-11-26 19:24:18.191681] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:01.146 [2024-11-26 19:24:18.191691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:01.146 [2024-11-26 19:24:18.203678] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:01.146 [2024-11-26 19:24:18.203687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:01.146 [2024-11-26 19:24:18.215678] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:01.146 [2024-11-26 19:24:18.215686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:01.146 [2024-11-26 19:24:18.227678] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:01.146 [2024-11-26 19:24:18.227689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:01.146 [2024-11-26 19:24:18.239677] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:01.146 [2024-11-26 19:24:18.239686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:01.146 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3204183) - No such process
00:34:01.146 19:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3204183
00:34:01.146 19:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:34:01.146 19:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:01.146 19:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:01.146 19:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:01.146 19:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:34:01.146 19:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:01.146 19:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:01.146 delay0
00:34:01.146 19:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:01.146 19:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:34:01.146 19:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:01.146 19:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:01.146 19:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:01.146 19:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:34:01.407 [2024-11-26 19:24:18.362107] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:34:09.544 Initializing NVMe Controllers
00:34:09.544 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:34:09.544 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:34:09.544 Initialization complete. Launching workers.
00:34:09.544 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 292, failed: 8673 00:34:09.544 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 8898, failed to submit 67 00:34:09.544 success 8774, unsuccessful 124, failed 0 00:34:09.544 19:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:34:09.544 19:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:34:09.544 19:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:09.544 19:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:34:09.544 19:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:09.544 19:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:34:09.544 19:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:09.544 19:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:09.544 rmmod nvme_tcp 00:34:09.544 rmmod nvme_fabrics 00:34:09.544 rmmod nvme_keyring 00:34:09.544 19:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:09.544 19:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:34:09.544 19:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:34:09.544 19:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3201226 ']' 00:34:09.544 19:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3201226 00:34:09.544 19:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3201226 ']' 00:34:09.544 19:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3201226 00:34:09.544 19:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:34:09.544 19:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:09.544 19:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3201226 00:34:09.544 19:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:09.544 19:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:09.544 19:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3201226' 00:34:09.544 killing process with pid 3201226 00:34:09.544 19:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3201226 00:34:09.544 19:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3201226 00:34:09.544 19:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:09.544 19:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:09.544 19:24:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:09.544 19:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:34:09.544 19:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:34:09.544 19:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:09.544 19:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:34:09.544 19:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:09.544 19:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:09.544 19:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:09.544 19:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:09.544 19:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:10.929 19:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:10.929 00:34:10.929 real 0m34.450s 00:34:10.929 user 0m43.799s 00:34:10.929 sys 0m12.769s 00:34:10.929 19:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:10.929 19:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:10.929 ************************************ 00:34:10.929 END TEST nvmf_zcopy 00:34:10.929 ************************************ 00:34:10.929 19:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:10.929 19:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:10.929 19:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:10.929 19:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:10.929 ************************************ 00:34:10.929 START TEST nvmf_nmic 00:34:10.929 ************************************ 00:34:10.929 19:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:10.929 * Looking for test storage... 
00:34:10.929 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:10.929 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:10.929 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:34:10.929 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:10.929 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:10.929 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:10.929 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:10.929 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:10.929 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:34:10.929 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:34:10.929 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:34:10.929 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:34:10.929 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:34:10.929 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:34:10.929 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:34:10.929 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:10.929 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:34:10.929 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:34:10.929 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:10.929 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:10.929 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:34:10.929 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:34:10.929 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:10.929 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:34:10.929 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:34:11.191 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:34:11.191 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:34:11.191 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:11.191 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:34:11.191 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:34:11.191 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:11.191 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:11.191 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:34:11.191 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:11.191 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:11.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:11.191 --rc genhtml_branch_coverage=1 00:34:11.191 --rc genhtml_function_coverage=1 00:34:11.191 --rc genhtml_legend=1 00:34:11.191 --rc geninfo_all_blocks=1 00:34:11.191 --rc geninfo_unexecuted_blocks=1 00:34:11.191 00:34:11.191 ' 00:34:11.191 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:11.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:11.191 --rc genhtml_branch_coverage=1 00:34:11.191 --rc genhtml_function_coverage=1 00:34:11.191 --rc genhtml_legend=1 00:34:11.191 --rc geninfo_all_blocks=1 00:34:11.191 --rc geninfo_unexecuted_blocks=1 00:34:11.191 00:34:11.191 ' 00:34:11.191 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:11.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:11.191 --rc genhtml_branch_coverage=1 00:34:11.191 --rc genhtml_function_coverage=1 00:34:11.191 --rc genhtml_legend=1 00:34:11.191 --rc geninfo_all_blocks=1 00:34:11.191 --rc geninfo_unexecuted_blocks=1 00:34:11.191 00:34:11.191 ' 00:34:11.191 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:11.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:11.191 --rc genhtml_branch_coverage=1 00:34:11.191 --rc genhtml_function_coverage=1 00:34:11.191 --rc genhtml_legend=1 00:34:11.191 --rc geninfo_all_blocks=1 00:34:11.191 --rc geninfo_unexecuted_blocks=1 00:34:11.191 00:34:11.191 ' 00:34:11.191 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:11.191 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:34:11.191 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:11.191 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:11.191 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:11.191 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:11.191 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:11.191 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:11.191 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:11.191 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:11.191 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:11.191 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:11.191 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:11.191 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:11.191 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:11.191 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:11.191 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:11.191 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:11.191 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:11.191 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:34:11.191 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:11.191 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:11.191 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:11.192 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.192 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.192 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.192 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:34:11.192 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.192 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:34:11.192 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:11.192 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:11.192 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:11.192 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:11.192 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:11.192 19:24:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:11.192 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:11.192 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:11.192 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:11.192 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:11.192 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:11.192 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:11.192 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:34:11.192 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:11.192 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:11.192 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:11.192 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:11.192 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:11.192 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:11.192 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:11.192 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:11.192 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:11.192 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:11.192 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:34:11.192 19:24:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:19.328 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:19.328 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:34:19.328 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:19.328 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:19.328 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:19.328 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:19.328 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:19.328 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:34:19.328 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:19.328 19:24:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:34:19.328 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:34:19.328 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:34:19.328 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:34:19.328 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:34:19.328 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:19.329 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:19.329 19:24:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:19.329 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:19.329 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:19.329 
19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:19.329 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
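The nvmf_tcp_init trace running here reduces to the following interface setup, consolidated from the commands as logged (including the link-up steps that follow just below); this is a readability sketch of what the harness ran, not an extra step, and the names and addresses are exactly the ones it printed:

  ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move one E810 port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

Keeping the target port and the initiator port in separate network namespaces makes the NVMe/TCP traffic between 10.0.0.1 and 10.0.0.2 cross the physical link rather than short-circuiting through the local stack.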
00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:19.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:19.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:34:19.329 00:34:19.329 --- 10.0.0.2 ping statistics --- 00:34:19.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:19.329 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:19.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:19.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:34:19.329 00:34:19.329 --- 10.0.0.1 ping statistics --- 00:34:19.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:19.329 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:19.329 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:19.330 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:19.330 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:34:19.330 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:19.330 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:19.330 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:19.330 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3210696 00:34:19.330 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 3210696 00:34:19.330 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:19.330 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3210696 ']' 00:34:19.330 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:19.330 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:19.330 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:19.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:19.330 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:19.330 19:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:19.330 [2024-11-26 19:24:35.733306] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:19.330 [2024-11-26 19:24:35.734418] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:34:19.330 [2024-11-26 19:24:35.734468] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:19.330 [2024-11-26 19:24:35.834009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:19.330 [2024-11-26 19:24:35.889364] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:19.330 [2024-11-26 19:24:35.889417] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:19.330 [2024-11-26 19:24:35.889425] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:19.330 [2024-11-26 19:24:35.889432] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:19.330 [2024-11-26 19:24:35.889439] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:19.330 [2024-11-26 19:24:35.891824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:19.330 [2024-11-26 19:24:35.891986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:19.330 [2024-11-26 19:24:35.892146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:19.330 [2024-11-26 19:24:35.892146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:19.330 [2024-11-26 19:24:35.970980] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:19.330 [2024-11-26 19:24:35.971966] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:19.330 [2024-11-26 19:24:35.972175] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:34:19.330 [2024-11-26 19:24:35.972704] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:19.330 [2024-11-26 19:24:35.972719] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:19.330 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:19.330 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:34:19.330 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:19.330 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:19.330 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:19.591 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:19.591 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:19.591 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.591 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:19.591 [2024-11-26 19:24:36.585030] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:19.591 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.591 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:19.591 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.591 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:19.591 Malloc0 00:34:19.591 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.591 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:19.591 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.591 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:19.591 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.591 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:19.591 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.591 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:19.591 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.591 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
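The rpc_cmd provisioning just traced maps onto SPDK's scripts/rpc.py; run by hand against the same target it would look like the sketch below (assuming the default /var/tmp/spdk.sock RPC socket; the arguments are the ones the harness passed):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                                       # TCP transport, 8192-byte in-capsule data
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                                          # 64 MiB ram-backed bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # allow any host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Test case1 below then creates a second subsystem, cnode2, and tries to add the same Malloc0 to it; because the first nvmf_subsystem_add_ns claimed the bdev exclusive_write, the second attempt is expected to fail with the -32602 "Invalid parameters" response seen in the trace.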
00:34:19.591 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.592 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:19.592 [2024-11-26 19:24:36.677373] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:19.592 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.592 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:34:19.592 test case1: single bdev can't be used in multiple subsystems 00:34:19.592 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:34:19.592 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.592 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:19.592 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.592 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:19.592 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.592 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:19.592 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.592 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:34:19.592 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:34:19.592 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.592 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:19.592 [2024-11-26 19:24:36.712648] bdev.c:8467:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:34:19.592 [2024-11-26 19:24:36.712677] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:34:19.592 [2024-11-26 19:24:36.712686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.592 request: 00:34:19.592 { 00:34:19.592 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:34:19.592 "namespace": { 00:34:19.592 "bdev_name": "Malloc0", 00:34:19.592 "no_auto_visible": false, 00:34:19.592 "hide_metadata": false 00:34:19.592 }, 00:34:19.592 "method": "nvmf_subsystem_add_ns", 00:34:19.592 "req_id": 1 00:34:19.592 } 00:34:19.592 Got JSON-RPC error response 00:34:19.592 response: 00:34:19.592 { 00:34:19.592 "code": -32602, 00:34:19.592 "message": "Invalid parameters" 00:34:19.592 } 00:34:19.592 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:19.592 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:34:19.592 19:24:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:34:19.592 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:34:19.592 Adding namespace failed - expected result. 00:34:19.592 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:34:19.592 test case2: host connect to nvmf target in multiple paths 00:34:19.592 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:19.592 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.592 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:19.592 [2024-11-26 19:24:36.724811] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:19.592 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.592 19:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:20.163 19:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:34:20.424 19:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:34:20.424 19:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:34:20.424 19:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:20.424 19:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:34:20.424 19:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:34:22.969 19:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:22.969 19:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:22.969 19:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:22.969 19:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:34:22.969 19:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:22.969 19:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:34:22.969 19:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:22.969 [global] 00:34:22.969 thread=1 00:34:22.969 invalidate=1 
00:34:22.969 rw=write 00:34:22.969 time_based=1 00:34:22.969 runtime=1 00:34:22.969 ioengine=libaio 00:34:22.969 direct=1 00:34:22.969 bs=4096 00:34:22.969 iodepth=1 00:34:22.969 norandommap=0 00:34:22.969 numjobs=1 00:34:22.969 00:34:22.969 verify_dump=1 00:34:22.969 verify_backlog=512 00:34:22.969 verify_state_save=0 00:34:22.969 do_verify=1 00:34:22.969 verify=crc32c-intel 00:34:22.969 [job0] 00:34:22.969 filename=/dev/nvme0n1 00:34:22.969 Could not set queue depth (nvme0n1) 00:34:22.969 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:22.969 fio-3.35 00:34:22.969 Starting 1 thread 00:34:24.353 00:34:24.353 job0: (groupid=0, jobs=1): err= 0: pid=3211751: Tue Nov 26 19:24:41 2024 00:34:24.353 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:24.353 slat (nsec): min=6950, max=58361, avg=26574.03, stdev=4050.56 00:34:24.353 clat (usec): min=790, max=1237, avg=1037.83, stdev=72.06 00:34:24.353 lat (usec): min=798, max=1263, avg=1064.40, stdev=72.10 00:34:24.353 clat percentiles (usec): 00:34:24.353 | 1.00th=[ 857], 5.00th=[ 914], 10.00th=[ 947], 20.00th=[ 988], 00:34:24.353 | 30.00th=[ 1012], 40.00th=[ 1029], 50.00th=[ 1045], 60.00th=[ 1057], 00:34:24.353 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1123], 95.00th=[ 1139], 00:34:24.353 | 99.00th=[ 1188], 99.50th=[ 1221], 99.90th=[ 1237], 99.95th=[ 1237], 00:34:24.353 | 99.99th=[ 1237] 00:34:24.353 write: IOPS=725, BW=2901KiB/s (2971kB/s)(2904KiB/1001msec); 0 zone resets 00:34:24.353 slat (nsec): min=9367, max=70656, avg=29322.46, stdev=10024.57 00:34:24.353 clat (usec): min=234, max=846, avg=584.87, stdev=98.28 00:34:24.353 lat (usec): min=245, max=899, avg=614.19, stdev=103.19 00:34:24.353 clat percentiles (usec): 00:34:24.353 | 1.00th=[ 355], 5.00th=[ 404], 10.00th=[ 445], 20.00th=[ 502], 00:34:24.353 | 30.00th=[ 545], 40.00th=[ 570], 50.00th=[ 586], 60.00th=[ 627], 00:34:24.353 | 70.00th=[ 644], 80.00th=[ 676], 90.00th=[ 701], 95.00th=[ 725], 00:34:24.353 | 99.00th=[ 766], 99.50th=[ 799], 99.90th=[ 848], 99.95th=[ 848], 00:34:24.353 | 99.99th=[ 848] 00:34:24.353 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:34:24.353 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:24.353 lat (usec) : 250=0.08%, 500=11.55%, 750=45.80%, 1000=12.36% 00:34:24.353 lat (msec) : 2=30.21% 00:34:24.353 cpu : usr=2.00%, sys=3.40%, ctx=1238, majf=0, minf=1 00:34:24.353 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:24.353 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.353 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.353 issued rwts: total=512,726,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.353 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:24.353 00:34:24.353 Run status group 0 (all jobs): 00:34:24.353 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:34:24.353 WRITE: bw=2901KiB/s (2971kB/s), 2901KiB/s-2901KiB/s (2971kB/s-2971kB/s), io=2904KiB (2974kB), run=1001-1001msec 00:34:24.353 00:34:24.353 Disk stats (read/write): 00:34:24.353 nvme0n1: ios=562/560, merge=0/0, ticks=571/314, in_queue=885, util=93.69% 00:34:24.353 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:24.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:34:24.353 19:24:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:24.353 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:34:24.353 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:24.353 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:24.353 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:24.353 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:24.353 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:34:24.353 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:24.353 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:34:24.353 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:24.353 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:34:24.353 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:24.353 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:34:24.353 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:24.353 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:24.353 rmmod nvme_tcp 00:34:24.353 rmmod nvme_fabrics 00:34:24.353 rmmod nvme_keyring 00:34:24.353 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:24.353 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:34:24.353 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:34:24.353 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3210696 ']' 00:34:24.353 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3210696 00:34:24.353 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3210696 ']' 00:34:24.353 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3210696 00:34:24.353 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:34:24.353 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:24.353 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3210696 00:34:24.353 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:24.353 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:24.353 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 3210696' 00:34:24.353 killing process with pid 3210696 00:34:24.353 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3210696 00:34:24.353 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3210696 00:34:24.614 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:24.614 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:24.614 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:24.614 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:34:24.614 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:34:24.614 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:24.614 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:34:24.614 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:24.614 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:24.614 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:24.614 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:24.614 19:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:26.524 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:26.524 00:34:26.524 real 0m15.737s 00:34:26.524 user 0m34.709s 00:34:26.524 sys 0m7.393s 00:34:26.524 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:26.524 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:26.524 ************************************ 00:34:26.524 END TEST nvmf_nmic 00:34:26.524 ************************************ 00:34:26.524 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:26.524 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:26.525 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:26.525 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:26.784 ************************************ 00:34:26.784 START TEST nvmf_fio_target 00:34:26.784 ************************************ 00:34:26.784 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:26.784 * Looking for test storage... 
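For reference, the single write job the nmic test drove through scripts/fio-wrapper above corresponds to this standalone fio job file; the parameters are taken verbatim from the [global]/[job0] dump in the trace, and /dev/nvme0n1 is simply whichever block device the nvme connect calls produced on this host, so the name will vary:

[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1

Saved under a hypothetical name such as nmic.fio and run with 'fio nmic.fio', this reproduces the wrapper's workload: a 1-second, 4 KiB, queue-depth-1 libaio write job whose completed blocks are read back and checked with crc32c-intel verification.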
00:34:26.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:26.784 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:26.784 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:34:26.784 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:26.784 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:26.784 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:26.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:26.785 --rc genhtml_branch_coverage=1 00:34:26.785 --rc genhtml_function_coverage=1 00:34:26.785 --rc genhtml_legend=1 00:34:26.785 --rc geninfo_all_blocks=1 00:34:26.785 --rc geninfo_unexecuted_blocks=1 00:34:26.785 00:34:26.785 ' 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:26.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:26.785 --rc genhtml_branch_coverage=1 00:34:26.785 --rc genhtml_function_coverage=1 00:34:26.785 --rc genhtml_legend=1 00:34:26.785 --rc geninfo_all_blocks=1 00:34:26.785 --rc geninfo_unexecuted_blocks=1 00:34:26.785 00:34:26.785 ' 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:26.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:26.785 --rc genhtml_branch_coverage=1 00:34:26.785 --rc genhtml_function_coverage=1 00:34:26.785 --rc genhtml_legend=1 00:34:26.785 --rc geninfo_all_blocks=1 00:34:26.785 --rc geninfo_unexecuted_blocks=1 00:34:26.785 00:34:26.785 ' 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:26.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:26.785 --rc genhtml_branch_coverage=1 00:34:26.785 --rc genhtml_function_coverage=1 00:34:26.785 --rc genhtml_legend=1 00:34:26.785 --rc geninfo_all_blocks=1 00:34:26.785 --rc geninfo_unexecuted_blocks=1 00:34:26.785 
00:34:26.785 ' 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:26.785 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:27.102 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:27.102 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:27.102 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:27.102 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:27.102 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:27.102 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.102 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.102 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.102 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:34:27.102 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.102 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:34:27.102 19:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:27.102 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:27.102 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:27.102 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:27.102 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:34:27.102 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:27.102 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:27.102 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:27.102 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:27.102 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:27.102 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:27.102 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:27.102 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:27.102 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:34:27.102 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:27.102 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:27.102 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:27.102 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:27.102 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:27.102 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:27.102 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:27.102 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:27.102 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:27.102 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:27.102 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:27.102 19:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:35.241 19:24:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:35.241 19:24:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:35.241 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:35.241 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:35.241 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:35.242 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:35.242 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:35.242 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:35.242 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:34:35.242 00:34:35.242 --- 10.0.0.2 ping statistics --- 00:34:35.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:35.242 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:35.242 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:35.242 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:34:35.242 00:34:35.242 --- 10.0.0.1 ping statistics --- 00:34:35.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:35.242 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3216229 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3216229 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:35.242 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3216229 ']' 00:34:35.243 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:35.243 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:35.243 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:35.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
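Condensed from the nvmf_tcp_init trace above: the two ports of the E810 NIC (0000:4b:00.0 and 0000:4b:00.1, apparently wired back-to-back, since the pings succeed) are split between the root namespace and a private network namespace, and the target is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF, as traced just above), so initiator and target exchange real TCP traffic over the wire. Stripped of xtrace noise, the setup commands were:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                  # root ns -> target port
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator port

Interface names cvl_0_0/cvl_0_1 are host-specific. The SPDK_NVMF comment on the iptables rule is what lets the iptr cleanup helper, visible at the end of the nmic test above, restore the original ruleset via iptables-save | grep -v SPDK_NVMF | iptables-restore.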
00:34:35.243 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:35.243 19:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:35.243 [2024-11-26 19:24:51.558047] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:35.243 [2024-11-26 19:24:51.559177] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:34:35.243 [2024-11-26 19:24:51.559233] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:35.243 [2024-11-26 19:24:51.660593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:35.243 [2024-11-26 19:24:51.713807] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:35.243 [2024-11-26 19:24:51.713866] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:35.243 [2024-11-26 19:24:51.713874] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:35.243 [2024-11-26 19:24:51.713882] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:35.243 [2024-11-26 19:24:51.713888] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:35.243 [2024-11-26 19:24:51.715932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:35.243 [2024-11-26 19:24:51.716094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:35.243 [2024-11-26 19:24:51.716261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:35.243 [2024-11-26 19:24:51.716449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:35.243 [2024-11-26 19:24:51.794880] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:35.243 [2024-11-26 19:24:51.795793] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:35.243 [2024-11-26 19:24:51.796010] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:35.243 [2024-11-26 19:24:51.796562] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:35.243 [2024-11-26 19:24:51.796592] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
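With the target process up in interrupt mode, the trace that follows provisions it over /var/tmp/spdk.sock. Reduced to the RPC calls (rpc.py standing in for the full scripts/rpc.py path used in the log; arguments verbatim; 64/512 are the MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE set in fio.sh, i.e. 64 MiB bdevs with 512-byte blocks), the sequence is:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512    # run twice: Malloc0, Malloc1 (plain namespaces)
rpc.py bdev_malloc_create 64 512    # run twice: Malloc2, Malloc3 (raid0 members)
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
rpc.py bdev_malloc_create 64 512    # run three times: Malloc4, Malloc5, Malloc6 (concat members)
rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

The nvme connect against 10.0.0.2:4420 that follows therefore surfaces four namespaces (two plain mallocs, one raid0, one concat, i.e. /dev/nvme0n1 through /dev/nvme0n4 on this host), which is why waitforserial is invoked with an expected device count of 4 before the four-job fio run starts.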
00:34:35.243 19:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:35.243 19:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:34:35.243 19:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:35.243 19:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:35.243 19:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:35.243 19:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:35.243 19:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:35.504 [2024-11-26 19:24:52.605402] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:35.504 19:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:35.766 19:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:34:35.766 19:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:36.026 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:34:36.026 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:36.288 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:34:36.288 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:36.548 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:34:36.548 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:34:36.549 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:36.809 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:34:36.809 19:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:37.070 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:34:37.070 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:37.070 19:24:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:34:37.070 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:34:37.332 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:37.594 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:37.594 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:37.594 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:37.594 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:37.855 19:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:38.117 [2024-11-26 19:24:55.133358] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:38.117 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:34:38.379 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:34:38.379 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:38.951 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:34:38.951 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:34:38.951 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:38.951 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:34:38.951 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:34:38.951 19:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:34:40.868 19:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:40.868 19:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:34:40.868 19:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:40.868 19:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:34:40.868 19:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:40.868 19:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:34:40.868 19:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:40.868 [global] 00:34:40.868 thread=1 00:34:40.868 invalidate=1 00:34:40.868 rw=write 00:34:40.868 time_based=1 00:34:40.868 runtime=1 00:34:40.868 ioengine=libaio 00:34:40.868 direct=1 00:34:40.868 bs=4096 00:34:40.868 iodepth=1 00:34:40.868 norandommap=0 00:34:40.868 numjobs=1 00:34:40.868 00:34:40.868 verify_dump=1 00:34:40.868 verify_backlog=512 00:34:40.868 verify_state_save=0 00:34:40.868 do_verify=1 00:34:40.868 verify=crc32c-intel 00:34:40.868 [job0] 00:34:40.868 filename=/dev/nvme0n1 00:34:40.868 [job1] 00:34:40.868 filename=/dev/nvme0n2 00:34:40.868 [job2] 00:34:40.868 filename=/dev/nvme0n3 00:34:40.868 [job3] 00:34:40.868 filename=/dev/nvme0n4 00:34:41.151 Could not set queue depth (nvme0n1) 00:34:41.151 Could not set queue depth (nvme0n2) 00:34:41.151 Could not set queue depth (nvme0n3) 00:34:41.151 Could not set queue depth (nvme0n4) 00:34:41.414 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:41.414 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:41.414 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:41.414 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:41.414 fio-3.35 00:34:41.414 Starting 4 threads 00:34:42.819 00:34:42.819 job0: (groupid=0, jobs=1): err= 0: pid=3217655: Tue Nov 26 19:24:59 2024 00:34:42.819 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:42.819 slat (nsec): min=6925, max=56013, avg=26134.52, stdev=5192.48 00:34:42.819 clat (usec): min=636, max=1321, avg=1039.88, stdev=100.53 00:34:42.819 lat (usec): min=646, max=1348, avg=1066.01, stdev=101.63 00:34:42.819 clat percentiles (usec): 00:34:42.819 | 1.00th=[ 775], 5.00th=[ 848], 10.00th=[ 906], 20.00th=[ 971], 00:34:42.819 | 30.00th=[ 1004], 40.00th=[ 1029], 50.00th=[ 1057], 60.00th=[ 1074], 00:34:42.819 | 70.00th=[ 1090], 80.00th=[ 1123], 90.00th=[ 1156], 95.00th=[ 1188], 00:34:42.819 | 99.00th=[ 1254], 99.50th=[ 1270], 99.90th=[ 1319], 99.95th=[ 1319], 00:34:42.819 | 99.99th=[ 1319] 00:34:42.819 write: IOPS=771, BW=3085KiB/s (3159kB/s)(3088KiB/1001msec); 0 zone resets 00:34:42.819 slat (nsec): min=9786, max=56466, avg=25056.26, stdev=11937.71 00:34:42.819 clat (usec): min=234, max=979, avg=552.17, stdev=135.12 00:34:42.819 lat (usec): min=266, max=1014, avg=577.23, stdev=138.15 00:34:42.819 clat percentiles (usec): 00:34:42.819 | 1.00th=[ 277], 5.00th=[ 359], 10.00th=[ 388], 20.00th=[ 441], 00:34:42.819 | 30.00th=[ 474], 40.00th=[ 502], 50.00th=[ 537], 60.00th=[ 578], 00:34:42.819 | 70.00th=[ 611], 80.00th=[ 668], 90.00th=[ 734], 95.00th=[ 799], 00:34:42.819 | 
99.00th=[ 930], 99.50th=[ 963], 99.90th=[ 979], 99.95th=[ 979], 00:34:42.819 | 99.99th=[ 979] 00:34:42.820 bw ( KiB/s): min= 4087, max= 4087, per=33.60%, avg=4087.00, stdev= 0.00, samples=1 00:34:42.820 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:34:42.820 lat (usec) : 250=0.08%, 500=24.07%, 750=31.15%, 1000=16.51% 00:34:42.820 lat (msec) : 2=28.19% 00:34:42.820 cpu : usr=1.50%, sys=3.60%, ctx=1285, majf=0, minf=1 00:34:42.820 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:42.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.820 issued rwts: total=512,772,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:42.820 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:42.820 job1: (groupid=0, jobs=1): err= 0: pid=3217665: Tue Nov 26 19:24:59 2024 00:34:42.820 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:42.820 slat (nsec): min=6769, max=47546, avg=25544.08, stdev=6504.00 00:34:42.820 clat (usec): min=298, max=1022, avg=773.75, stdev=113.56 00:34:42.820 lat (usec): min=305, max=1065, avg=799.29, stdev=114.83 00:34:42.820 clat percentiles (usec): 00:34:42.820 | 1.00th=[ 490], 5.00th=[ 562], 10.00th=[ 619], 20.00th=[ 685], 00:34:42.820 | 30.00th=[ 717], 40.00th=[ 750], 50.00th=[ 791], 60.00th=[ 824], 00:34:42.820 | 70.00th=[ 848], 80.00th=[ 873], 90.00th=[ 906], 95.00th=[ 930], 00:34:42.820 | 99.00th=[ 971], 99.50th=[ 996], 99.90th=[ 1020], 99.95th=[ 1020], 00:34:42.820 | 99.99th=[ 1020] 00:34:42.820 write: IOPS=1006, BW=4028KiB/s (4125kB/s)(4032KiB/1001msec); 0 zone resets 00:34:42.820 slat (nsec): min=9963, max=68117, avg=33029.15, stdev=8246.77 00:34:42.820 clat (usec): min=132, max=999, avg=541.56, stdev=134.40 00:34:42.820 lat (usec): min=166, max=1035, avg=574.59, stdev=136.17 00:34:42.820 clat percentiles (usec): 00:34:42.820 | 1.00th=[ 243], 5.00th=[ 302], 10.00th=[ 363], 20.00th=[ 420], 00:34:42.820 | 30.00th=[ 482], 40.00th=[ 515], 50.00th=[ 545], 60.00th=[ 578], 00:34:42.820 | 70.00th=[ 619], 80.00th=[ 660], 90.00th=[ 717], 95.00th=[ 750], 00:34:42.820 | 99.00th=[ 816], 99.50th=[ 857], 99.90th=[ 938], 99.95th=[ 996], 00:34:42.820 | 99.99th=[ 996] 00:34:42.820 bw ( KiB/s): min= 4096, max= 4096, per=33.67%, avg=4096.00, stdev= 0.00, samples=1 00:34:42.820 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:42.820 lat (usec) : 250=0.92%, 500=23.22%, 750=52.76%, 1000=22.96% 00:34:42.820 lat (msec) : 2=0.13% 00:34:42.820 cpu : usr=2.20%, sys=4.80%, ctx=1521, majf=0, minf=1 00:34:42.820 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:42.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.820 issued rwts: total=512,1008,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:42.820 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:42.820 job2: (groupid=0, jobs=1): err= 0: pid=3217684: Tue Nov 26 19:24:59 2024 00:34:42.820 read: IOPS=145, BW=583KiB/s (597kB/s)(584KiB/1001msec) 00:34:42.820 slat (nsec): min=25961, max=45129, avg=27283.16, stdev=2177.64 00:34:42.820 clat (usec): min=367, max=42135, avg=4845.63, stdev=12076.11 00:34:42.820 lat (usec): min=395, max=42161, avg=4872.91, stdev=12075.74 00:34:42.820 clat percentiles (usec): 00:34:42.820 | 1.00th=[ 392], 5.00th=[ 449], 10.00th=[ 553], 20.00th=[ 766], 00:34:42.820 | 30.00th=[ 
840], 40.00th=[ 914], 50.00th=[ 996], 60.00th=[ 1057], 00:34:42.820 | 70.00th=[ 1139], 80.00th=[ 1221], 90.00th=[ 1450], 95.00th=[41681], 00:34:42.820 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:42.820 | 99.99th=[42206] 00:34:42.820 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:34:42.820 slat (nsec): min=9813, max=56291, avg=33920.08, stdev=8530.75 00:34:42.820 clat (usec): min=137, max=1022, avg=520.22, stdev=189.72 00:34:42.820 lat (usec): min=148, max=1057, avg=554.14, stdev=192.87 00:34:42.820 clat percentiles (usec): 00:34:42.820 | 1.00th=[ 165], 5.00th=[ 235], 10.00th=[ 281], 20.00th=[ 326], 00:34:42.820 | 30.00th=[ 396], 40.00th=[ 474], 50.00th=[ 523], 60.00th=[ 578], 00:34:42.820 | 70.00th=[ 635], 80.00th=[ 693], 90.00th=[ 775], 95.00th=[ 832], 00:34:42.820 | 99.00th=[ 914], 99.50th=[ 988], 99.90th=[ 1020], 99.95th=[ 1020], 00:34:42.820 | 99.99th=[ 1020] 00:34:42.820 bw ( KiB/s): min= 4096, max= 4096, per=33.67%, avg=4096.00, stdev= 0.00, samples=1 00:34:42.820 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:42.820 lat (usec) : 250=4.56%, 500=32.83%, 750=33.74%, 1000=17.63% 00:34:42.820 lat (msec) : 2=9.12%, 50=2.13% 00:34:42.820 cpu : usr=1.30%, sys=1.80%, ctx=659, majf=0, minf=1 00:34:42.820 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:42.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.820 issued rwts: total=146,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:42.820 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:42.820 job3: (groupid=0, jobs=1): err= 0: pid=3217690: Tue Nov 26 19:24:59 2024 00:34:42.820 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:42.820 slat (nsec): min=7463, max=59930, avg=26885.71, stdev=3564.57 00:34:42.820 clat (usec): min=459, max=1382, avg=1010.48, stdev=147.71 00:34:42.820 lat (usec): min=487, max=1409, avg=1037.37, stdev=148.06 00:34:42.820 clat percentiles (usec): 00:34:42.820 | 1.00th=[ 619], 5.00th=[ 758], 10.00th=[ 807], 20.00th=[ 898], 00:34:42.820 | 30.00th=[ 947], 40.00th=[ 988], 50.00th=[ 1029], 60.00th=[ 1057], 00:34:42.820 | 70.00th=[ 1090], 80.00th=[ 1139], 90.00th=[ 1188], 95.00th=[ 1221], 00:34:42.820 | 99.00th=[ 1303], 99.50th=[ 1319], 99.90th=[ 1385], 99.95th=[ 1385], 00:34:42.820 | 99.99th=[ 1385] 00:34:42.820 write: IOPS=751, BW=3005KiB/s (3077kB/s)(3008KiB/1001msec); 0 zone resets 00:34:42.820 slat (nsec): min=9620, max=59119, avg=32162.69, stdev=9153.68 00:34:42.820 clat (usec): min=225, max=977, avg=578.50, stdev=139.65 00:34:42.820 lat (usec): min=237, max=1012, avg=610.66, stdev=142.92 00:34:42.820 clat percentiles (usec): 00:34:42.820 | 1.00th=[ 285], 5.00th=[ 363], 10.00th=[ 400], 20.00th=[ 461], 00:34:42.820 | 30.00th=[ 502], 40.00th=[ 537], 50.00th=[ 570], 60.00th=[ 611], 00:34:42.820 | 70.00th=[ 644], 80.00th=[ 701], 90.00th=[ 766], 95.00th=[ 824], 00:34:42.820 | 99.00th=[ 930], 99.50th=[ 955], 99.90th=[ 979], 99.95th=[ 979], 00:34:42.820 | 99.99th=[ 979] 00:34:42.820 bw ( KiB/s): min= 4096, max= 4096, per=33.67%, avg=4096.00, stdev= 0.00, samples=1 00:34:42.820 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:42.820 lat (usec) : 250=0.16%, 500=17.72%, 750=36.47%, 1000=22.23% 00:34:42.820 lat (msec) : 2=23.42% 00:34:42.820 cpu : usr=2.00%, sys=3.80%, ctx=1265, majf=0, minf=1 00:34:42.820 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:34:42.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.820 issued rwts: total=512,752,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:42.820 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:42.820 00:34:42.820 Run status group 0 (all jobs): 00:34:42.820 READ: bw=6721KiB/s (6883kB/s), 583KiB/s-2046KiB/s (597kB/s-2095kB/s), io=6728KiB (6889kB), run=1001-1001msec 00:34:42.820 WRITE: bw=11.9MiB/s (12.5MB/s), 2046KiB/s-4028KiB/s (2095kB/s-4125kB/s), io=11.9MiB (12.5MB), run=1001-1001msec 00:34:42.820 00:34:42.820 Disk stats (read/write): 00:34:42.820 nvme0n1: ios=537/520, merge=0/0, ticks=1468/270, in_queue=1738, util=96.49% 00:34:42.820 nvme0n2: ios=535/707, merge=0/0, ticks=1331/363, in_queue=1694, util=96.83% 00:34:42.820 nvme0n3: ios=116/512, merge=0/0, ticks=1447/252, in_queue=1699, util=96.72% 00:34:42.820 nvme0n4: ios=557/512, merge=0/0, ticks=1325/278, in_queue=1603, util=96.68% 00:34:42.820 19:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:34:42.820 [global] 00:34:42.820 thread=1 00:34:42.820 invalidate=1 00:34:42.820 rw=randwrite 00:34:42.820 time_based=1 00:34:42.820 runtime=1 00:34:42.820 ioengine=libaio 00:34:42.820 direct=1 00:34:42.820 bs=4096 00:34:42.820 iodepth=1 00:34:42.820 norandommap=0 00:34:42.820 numjobs=1 00:34:42.820 00:34:42.820 verify_dump=1 00:34:42.820 verify_backlog=512 00:34:42.820 verify_state_save=0 00:34:42.820 do_verify=1 00:34:42.820 verify=crc32c-intel 00:34:42.820 [job0] 00:34:42.820 filename=/dev/nvme0n1 00:34:42.820 [job1] 00:34:42.820 filename=/dev/nvme0n2 00:34:42.820 [job2] 00:34:42.820 filename=/dev/nvme0n3 00:34:42.820 [job3] 00:34:42.820 filename=/dev/nvme0n4 00:34:42.820 Could not set queue depth (nvme0n1) 00:34:42.820 Could not set queue depth (nvme0n2) 00:34:42.820 Could not set queue depth (nvme0n3) 00:34:42.820 Could not set queue depth (nvme0n4) 00:34:43.085 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:43.085 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:43.085 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:43.085 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:43.085 fio-3.35 00:34:43.085 Starting 4 threads 00:34:44.615 00:34:44.615 job0: (groupid=0, jobs=1): err= 0: pid=3218096: Tue Nov 26 19:25:01 2024 00:34:44.615 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:44.615 slat (nsec): min=7735, max=50719, avg=26993.25, stdev=2855.13 00:34:44.615 clat (usec): min=809, max=1328, avg=1096.35, stdev=78.97 00:34:44.615 lat (usec): min=836, max=1376, avg=1123.35, stdev=79.08 00:34:44.615 clat percentiles (usec): 00:34:44.615 | 1.00th=[ 881], 5.00th=[ 955], 10.00th=[ 1004], 20.00th=[ 1045], 00:34:44.615 | 30.00th=[ 1057], 40.00th=[ 1090], 50.00th=[ 1106], 60.00th=[ 1123], 00:34:44.615 | 70.00th=[ 1139], 80.00th=[ 1156], 90.00th=[ 1188], 95.00th=[ 1221], 00:34:44.615 | 99.00th=[ 1270], 99.50th=[ 1303], 99.90th=[ 1336], 99.95th=[ 1336], 00:34:44.615 | 99.99th=[ 1336] 00:34:44.615 write: IOPS=621, BW=2486KiB/s (2545kB/s)(2488KiB/1001msec); 0 zone resets 
00:34:44.615 slat (nsec): min=8831, max=61770, avg=28842.13, stdev=9504.20 00:34:44.615 clat (usec): min=244, max=942, avg=639.81, stdev=112.18 00:34:44.615 lat (usec): min=254, max=978, avg=668.65, stdev=116.71 00:34:44.615 clat percentiles (usec): 00:34:44.615 | 1.00th=[ 363], 5.00th=[ 449], 10.00th=[ 486], 20.00th=[ 545], 00:34:44.615 | 30.00th=[ 586], 40.00th=[ 619], 50.00th=[ 652], 60.00th=[ 676], 00:34:44.615 | 70.00th=[ 709], 80.00th=[ 742], 90.00th=[ 775], 95.00th=[ 799], 00:34:44.615 | 99.00th=[ 857], 99.50th=[ 881], 99.90th=[ 947], 99.95th=[ 947], 00:34:44.615 | 99.99th=[ 947] 00:34:44.615 bw ( KiB/s): min= 4096, max= 4096, per=48.45%, avg=4096.00, stdev= 0.00, samples=1 00:34:44.615 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:44.615 lat (usec) : 250=0.09%, 500=7.23%, 750=38.62%, 1000=13.23% 00:34:44.615 lat (msec) : 2=40.83% 00:34:44.615 cpu : usr=3.00%, sys=3.70%, ctx=1134, majf=0, minf=1 00:34:44.615 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:44.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.615 issued rwts: total=512,622,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:44.615 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:44.615 job1: (groupid=0, jobs=1): err= 0: pid=3218113: Tue Nov 26 19:25:01 2024 00:34:44.615 read: IOPS=232, BW=929KiB/s (952kB/s)(948KiB/1020msec) 00:34:44.615 slat (nsec): min=25126, max=43928, avg=26694.46, stdev=2833.84 00:34:44.615 clat (usec): min=608, max=42172, avg=2848.81, stdev=8084.65 00:34:44.615 lat (usec): min=635, max=42199, avg=2875.51, stdev=8084.97 00:34:44.615 clat percentiles (usec): 00:34:44.615 | 1.00th=[ 725], 5.00th=[ 791], 10.00th=[ 857], 20.00th=[ 1020], 00:34:44.615 | 30.00th=[ 1106], 40.00th=[ 1156], 50.00th=[ 1205], 60.00th=[ 1221], 00:34:44.615 | 70.00th=[ 1270], 80.00th=[ 1336], 90.00th=[ 1401], 95.00th=[ 1483], 00:34:44.615 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:34:44.615 | 99.99th=[42206] 00:34:44.615 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:34:44.616 slat (nsec): min=9803, max=63694, avg=30789.94, stdev=8250.10 00:34:44.616 clat (usec): min=269, max=1016, avg=618.10, stdev=141.32 00:34:44.616 lat (usec): min=285, max=1049, avg=648.89, stdev=143.48 00:34:44.616 clat percentiles (usec): 00:34:44.616 | 1.00th=[ 318], 5.00th=[ 379], 10.00th=[ 429], 20.00th=[ 502], 00:34:44.616 | 30.00th=[ 537], 40.00th=[ 578], 50.00th=[ 611], 60.00th=[ 652], 00:34:44.616 | 70.00th=[ 693], 80.00th=[ 750], 90.00th=[ 799], 95.00th=[ 848], 00:34:44.616 | 99.00th=[ 955], 99.50th=[ 963], 99.90th=[ 1020], 99.95th=[ 1020], 00:34:44.616 | 99.99th=[ 1020] 00:34:44.616 bw ( KiB/s): min= 4096, max= 4096, per=48.45%, avg=4096.00, stdev= 0.00, samples=1 00:34:44.616 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:44.616 lat (usec) : 500=13.48%, 750=42.06%, 1000=18.42% 00:34:44.616 lat (msec) : 2=24.70%, 50=1.34% 00:34:44.616 cpu : usr=1.18%, sys=2.16%, ctx=753, majf=0, minf=1 00:34:44.616 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:44.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.616 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.616 issued rwts: total=237,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:44.616 latency : target=0, window=0, percentile=100.00%, depth=1 
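A quick cross-check of the fio arithmetic above, using only values already in the log: with bs=4096 and these 1-second time_based jobs, per-job bandwidth is simply iops x 4 KiB, so job1's verify-stage figure of 1024 iops lines up with its reported 4096 KiB/s, and 237 reads x 4 KiB spread over 1020 msec gives the ~929 KiB/s read rate shown. A minimal shell sketch of the same check:

  iops=1024 bs=4096                     # job1 values reported above
  echo "$(( iops * bs / 1024 )) KiB/s"  # prints "4096 KiB/s"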
00:34:44.616 job2: (groupid=0, jobs=1): err= 0: pid=3218133: Tue Nov 26 19:25:01 2024 00:34:44.616 read: IOPS=463, BW=1854KiB/s (1899kB/s)(1856KiB/1001msec) 00:34:44.616 slat (nsec): min=13475, max=60447, avg=25813.13, stdev=3182.21 00:34:44.616 clat (usec): min=865, max=41356, avg=1300.89, stdev=1866.08 00:34:44.616 lat (usec): min=890, max=41382, avg=1326.71, stdev=1866.06 00:34:44.616 clat percentiles (usec): 00:34:44.616 | 1.00th=[ 930], 5.00th=[ 1037], 10.00th=[ 1090], 20.00th=[ 1156], 00:34:44.616 | 30.00th=[ 1172], 40.00th=[ 1205], 50.00th=[ 1221], 60.00th=[ 1237], 00:34:44.616 | 70.00th=[ 1270], 80.00th=[ 1287], 90.00th=[ 1319], 95.00th=[ 1369], 00:34:44.616 | 99.00th=[ 1434], 99.50th=[ 1483], 99.90th=[41157], 99.95th=[41157], 00:34:44.616 | 99.99th=[41157] 00:34:44.616 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:34:44.616 slat (nsec): min=9449, max=68020, avg=31756.87, stdev=6121.62 00:34:44.616 clat (usec): min=218, max=1171, avg=703.99, stdev=186.54 00:34:44.616 lat (usec): min=250, max=1204, avg=735.75, stdev=187.73 00:34:44.616 clat percentiles (usec): 00:34:44.616 | 1.00th=[ 297], 5.00th=[ 396], 10.00th=[ 461], 20.00th=[ 545], 00:34:44.616 | 30.00th=[ 594], 40.00th=[ 652], 50.00th=[ 693], 60.00th=[ 742], 00:34:44.616 | 70.00th=[ 807], 80.00th=[ 898], 90.00th=[ 963], 95.00th=[ 1012], 00:34:44.616 | 99.00th=[ 1074], 99.50th=[ 1106], 99.90th=[ 1172], 99.95th=[ 1172], 00:34:44.616 | 99.99th=[ 1172] 00:34:44.616 bw ( KiB/s): min= 4096, max= 4096, per=48.45%, avg=4096.00, stdev= 0.00, samples=1 00:34:44.616 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:44.616 lat (usec) : 250=0.10%, 500=6.86%, 750=25.10%, 1000=18.75% 00:34:44.616 lat (msec) : 2=49.08%, 50=0.10% 00:34:44.616 cpu : usr=1.80%, sys=2.70%, ctx=976, majf=0, minf=1 00:34:44.616 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:44.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.616 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.616 issued rwts: total=464,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:44.616 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:44.616 job3: (groupid=0, jobs=1): err= 0: pid=3218141: Tue Nov 26 19:25:01 2024 00:34:44.616 read: IOPS=28, BW=114KiB/s (116kB/s)(116KiB/1021msec) 00:34:44.616 slat (nsec): min=24848, max=46249, avg=25843.21, stdev=3928.50 00:34:44.616 clat (usec): min=625, max=41803, avg=25844.19, stdev=19861.94 00:34:44.616 lat (usec): min=650, max=41828, avg=25870.03, stdev=19862.52 00:34:44.616 clat percentiles (usec): 00:34:44.616 | 1.00th=[ 627], 5.00th=[ 660], 10.00th=[ 725], 20.00th=[ 930], 00:34:44.616 | 30.00th=[ 996], 40.00th=[40633], 50.00th=[40633], 60.00th=[41157], 00:34:44.616 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:34:44.616 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:34:44.616 | 99.99th=[41681] 00:34:44.616 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:34:44.616 slat (nsec): min=9203, max=79583, avg=30874.58, stdev=6066.54 00:34:44.616 clat (usec): min=152, max=1042, avg=489.78, stdev=147.43 00:34:44.616 lat (usec): min=161, max=1073, avg=520.66, stdev=148.10 00:34:44.616 clat percentiles (usec): 00:34:44.616 | 1.00th=[ 243], 5.00th=[ 273], 10.00th=[ 310], 20.00th=[ 343], 00:34:44.616 | 30.00th=[ 383], 40.00th=[ 437], 50.00th=[ 478], 60.00th=[ 529], 00:34:44.616 | 70.00th=[ 586], 80.00th=[ 627], 90.00th=[ 693], 
95.00th=[ 734], 00:34:44.616 | 99.00th=[ 799], 99.50th=[ 881], 99.90th=[ 1045], 99.95th=[ 1045], 00:34:44.616 | 99.99th=[ 1045] 00:34:44.616 bw ( KiB/s): min= 4096, max= 4096, per=48.45%, avg=4096.00, stdev= 0.00, samples=1 00:34:44.616 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:44.616 lat (usec) : 250=1.85%, 500=49.72%, 750=40.48%, 1000=4.07% 00:34:44.616 lat (msec) : 2=0.55%, 50=3.33% 00:34:44.616 cpu : usr=1.08%, sys=1.37%, ctx=542, majf=0, minf=1 00:34:44.616 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:44.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.616 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.616 issued rwts: total=29,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:44.616 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:44.616 00:34:44.616 Run status group 0 (all jobs): 00:34:44.616 READ: bw=4866KiB/s (4983kB/s), 114KiB/s-2046KiB/s (116kB/s-2095kB/s), io=4968KiB (5087kB), run=1001-1021msec 00:34:44.616 WRITE: bw=8454KiB/s (8657kB/s), 2006KiB/s-2486KiB/s (2054kB/s-2545kB/s), io=8632KiB (8839kB), run=1001-1021msec 00:34:44.616 00:34:44.616 Disk stats (read/write): 00:34:44.616 nvme0n1: ios=477/512, merge=0/0, ticks=525/275, in_queue=800, util=91.38% 00:34:44.616 nvme0n2: ios=239/512, merge=0/0, ticks=1121/306, in_queue=1427, util=96.73% 00:34:44.616 nvme0n3: ios=330/512, merge=0/0, ticks=431/346, in_queue=777, util=88.36% 00:34:44.616 nvme0n4: ios=24/512, merge=0/0, ticks=546/232, in_queue=778, util=89.50% 00:34:44.616 19:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:34:44.616 [global] 00:34:44.616 thread=1 00:34:44.616 invalidate=1 00:34:44.616 rw=write 00:34:44.616 time_based=1 00:34:44.616 runtime=1 00:34:44.616 ioengine=libaio 00:34:44.616 direct=1 00:34:44.616 bs=4096 00:34:44.616 iodepth=128 00:34:44.616 norandommap=0 00:34:44.616 numjobs=1 00:34:44.616 00:34:44.616 verify_dump=1 00:34:44.616 verify_backlog=512 00:34:44.616 verify_state_save=0 00:34:44.616 do_verify=1 00:34:44.616 verify=crc32c-intel 00:34:44.616 [job0] 00:34:44.616 filename=/dev/nvme0n1 00:34:44.616 [job1] 00:34:44.616 filename=/dev/nvme0n2 00:34:44.616 [job2] 00:34:44.616 filename=/dev/nvme0n3 00:34:44.616 [job3] 00:34:44.616 filename=/dev/nvme0n4 00:34:44.616 Could not set queue depth (nvme0n1) 00:34:44.616 Could not set queue depth (nvme0n2) 00:34:44.616 Could not set queue depth (nvme0n3) 00:34:44.616 Could not set queue depth (nvme0n4) 00:34:44.616 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:44.616 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:44.616 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:44.616 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:44.616 fio-3.35 00:34:44.616 Starting 4 threads 00:34:46.020 00:34:46.020 job0: (groupid=0, jobs=1): err= 0: pid=3218556: Tue Nov 26 19:25:02 2024 00:34:46.020 read: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec) 00:34:46.020 slat (nsec): min=960, max=11855k, avg=104594.00, stdev=763637.11 00:34:46.020 clat (usec): min=4481, max=52080, avg=13106.21, stdev=5651.27 00:34:46.020 lat (usec): 
min=4491, max=52088, avg=13210.80, stdev=5716.51 00:34:46.020 clat percentiles (usec): 00:34:46.020 | 1.00th=[ 6652], 5.00th=[ 7177], 10.00th=[ 7898], 20.00th=[ 8848], 00:34:46.020 | 30.00th=[ 9241], 40.00th=[ 9765], 50.00th=[11731], 60.00th=[13042], 00:34:46.020 | 70.00th=[14615], 80.00th=[17433], 90.00th=[20317], 95.00th=[23462], 00:34:46.020 | 99.00th=[30802], 99.50th=[42730], 99.90th=[52167], 99.95th=[52167], 00:34:46.020 | 99.99th=[52167] 00:34:46.020 write: IOPS=4145, BW=16.2MiB/s (17.0MB/s)(16.3MiB/1007msec); 0 zone resets 00:34:46.021 slat (nsec): min=1656, max=16264k, avg=124429.16, stdev=723411.69 00:34:46.021 clat (usec): min=1224, max=78001, avg=17753.67, stdev=15475.77 00:34:46.021 lat (usec): min=1270, max=78012, avg=17878.10, stdev=15577.29 00:34:46.021 clat percentiles (usec): 00:34:46.021 | 1.00th=[ 4228], 5.00th=[ 5407], 10.00th=[ 5800], 20.00th=[ 6587], 00:34:46.021 | 30.00th=[ 8160], 40.00th=[ 8979], 50.00th=[11207], 60.00th=[14877], 00:34:46.021 | 70.00th=[19006], 80.00th=[28967], 90.00th=[35390], 95.00th=[56886], 00:34:46.021 | 99.00th=[71828], 99.50th=[77071], 99.90th=[78119], 99.95th=[78119], 00:34:46.021 | 99.99th=[78119] 00:34:46.021 bw ( KiB/s): min=16080, max=16688, per=18.55%, avg=16384.00, stdev=429.92, samples=2 00:34:46.021 iops : min= 4020, max= 4172, avg=4096.00, stdev=107.48, samples=2 00:34:46.021 lat (msec) : 2=0.01%, 4=0.18%, 10=42.81%, 20=36.13%, 50=17.62% 00:34:46.021 lat (msec) : 100=3.25% 00:34:46.021 cpu : usr=3.48%, sys=4.97%, ctx=311, majf=0, minf=1 00:34:46.021 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:34:46.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.021 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:46.021 issued rwts: total=4096,4175,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:46.021 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:46.021 job1: (groupid=0, jobs=1): err= 0: pid=3218563: Tue Nov 26 19:25:02 2024 00:34:46.021 read: IOPS=4458, BW=17.4MiB/s (18.3MB/s)(17.6MiB/1008msec) 00:34:46.021 slat (nsec): min=916, max=24586k, avg=123348.88, stdev=993586.91 00:34:46.021 clat (usec): min=3502, max=82513, avg=15903.41, stdev=12240.16 00:34:46.021 lat (usec): min=3508, max=82542, avg=16026.75, stdev=12355.16 00:34:46.021 clat percentiles (usec): 00:34:46.021 | 1.00th=[ 3785], 5.00th=[ 5866], 10.00th=[ 6521], 20.00th=[ 7177], 00:34:46.021 | 30.00th=[ 8848], 40.00th=[10159], 50.00th=[10683], 60.00th=[14353], 00:34:46.021 | 70.00th=[17171], 80.00th=[22152], 90.00th=[28443], 95.00th=[44303], 00:34:46.021 | 99.00th=[62653], 99.50th=[65799], 99.90th=[73925], 99.95th=[73925], 00:34:46.021 | 99.99th=[82314] 00:34:46.021 write: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec); 0 zone resets 00:34:46.021 slat (nsec): min=1604, max=13407k, avg=88766.69, stdev=648430.73 00:34:46.021 clat (usec): min=1547, max=63569, avg=12237.93, stdev=9054.08 00:34:46.021 lat (usec): min=1556, max=63580, avg=12326.70, stdev=9112.14 00:34:46.021 clat percentiles (usec): 00:34:46.021 | 1.00th=[ 3818], 5.00th=[ 4490], 10.00th=[ 4883], 20.00th=[ 6325], 00:34:46.021 | 30.00th=[ 6849], 40.00th=[ 8291], 50.00th=[ 8455], 60.00th=[ 9372], 00:34:46.021 | 70.00th=[14484], 80.00th=[17695], 90.00th=[20055], 95.00th=[25822], 00:34:46.021 | 99.00th=[55837], 99.50th=[58983], 99.90th=[63177], 99.95th=[63177], 00:34:46.021 | 99.99th=[63701] 00:34:46.021 bw ( KiB/s): min=16384, max=20480, per=20.87%, avg=18432.00, stdev=2896.31, samples=2 00:34:46.021 iops : min= 
4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:34:46.021 lat (msec) : 2=0.09%, 4=1.23%, 10=47.11%, 20=35.37%, 50=13.95% 00:34:46.021 lat (msec) : 100=2.25% 00:34:46.021 cpu : usr=3.67%, sys=5.26%, ctx=244, majf=0, minf=1 00:34:46.021 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:34:46.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.021 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:46.021 issued rwts: total=4494,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:46.021 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:46.021 job2: (groupid=0, jobs=1): err= 0: pid=3218587: Tue Nov 26 19:25:02 2024 00:34:46.021 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:34:46.021 slat (nsec): min=965, max=15874k, avg=127095.05, stdev=863502.63 00:34:46.021 clat (usec): min=3751, max=62927, avg=14881.75, stdev=9096.42 00:34:46.021 lat (usec): min=3757, max=62936, avg=15008.84, stdev=9173.74 00:34:46.021 clat percentiles (usec): 00:34:46.021 | 1.00th=[ 5538], 5.00th=[ 7570], 10.00th=[ 8160], 20.00th=[ 8979], 00:34:46.021 | 30.00th=[ 9503], 40.00th=[10421], 50.00th=[11731], 60.00th=[13435], 00:34:46.021 | 70.00th=[14877], 80.00th=[17433], 90.00th=[29492], 95.00th=[36439], 00:34:46.021 | 99.00th=[51643], 99.50th=[56361], 99.90th=[62653], 99.95th=[63177], 00:34:46.021 | 99.99th=[63177] 00:34:46.021 write: IOPS=3719, BW=14.5MiB/s (15.2MB/s)(14.6MiB/1007msec); 0 zone resets 00:34:46.021 slat (nsec): min=1649, max=18318k, avg=134361.01, stdev=767531.58 00:34:46.021 clat (usec): min=1190, max=74235, avg=19878.10, stdev=15738.35 00:34:46.021 lat (usec): min=1202, max=74245, avg=20012.46, stdev=15843.07 00:34:46.021 clat percentiles (usec): 00:34:46.021 | 1.00th=[ 4686], 5.00th=[ 5342], 10.00th=[ 7046], 20.00th=[ 8848], 00:34:46.021 | 30.00th=[ 9503], 40.00th=[10945], 50.00th=[13042], 60.00th=[17695], 00:34:46.021 | 70.00th=[20055], 80.00th=[30278], 90.00th=[44827], 95.00th=[58983], 00:34:46.021 | 99.00th=[69731], 99.50th=[72877], 99.90th=[73925], 99.95th=[73925], 00:34:46.021 | 99.99th=[73925] 00:34:46.021 bw ( KiB/s): min=13744, max=15200, per=16.39%, avg=14472.00, stdev=1029.55, samples=2 00:34:46.021 iops : min= 3436, max= 3800, avg=3618.00, stdev=257.39, samples=2 00:34:46.021 lat (msec) : 2=0.03%, 4=0.20%, 10=34.83%, 20=41.42%, 50=19.28% 00:34:46.021 lat (msec) : 100=4.24% 00:34:46.021 cpu : usr=3.48%, sys=4.08%, ctx=311, majf=0, minf=3 00:34:46.021 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:34:46.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.021 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:46.021 issued rwts: total=3584,3746,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:46.021 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:46.021 job3: (groupid=0, jobs=1): err= 0: pid=3218595: Tue Nov 26 19:25:02 2024 00:34:46.021 read: IOPS=9304, BW=36.3MiB/s (38.1MB/s)(36.5MiB/1004msec) 00:34:46.021 slat (nsec): min=947, max=7769.6k, avg=54265.15, stdev=412302.65 00:34:46.021 clat (usec): min=1236, max=16205, avg=7175.19, stdev=1915.02 00:34:46.021 lat (usec): min=2178, max=16208, avg=7229.46, stdev=1932.05 00:34:46.021 clat percentiles (usec): 00:34:46.021 | 1.00th=[ 3523], 5.00th=[ 4752], 10.00th=[ 5276], 20.00th=[ 5604], 00:34:46.021 | 30.00th=[ 6128], 40.00th=[ 6456], 50.00th=[ 6783], 60.00th=[ 7177], 00:34:46.021 | 70.00th=[ 7832], 80.00th=[ 8455], 90.00th=[ 9765], 
95.00th=[10683], 00:34:46.021 | 99.00th=[13435], 99.50th=[14746], 99.90th=[15533], 99.95th=[16188], 00:34:46.021 | 99.99th=[16188] 00:34:46.021 write: IOPS=9689, BW=37.8MiB/s (39.7MB/s)(38.0MiB/1004msec); 0 zone resets 00:34:46.021 slat (nsec): min=1655, max=5983.4k, avg=45542.23, stdev=332163.82 00:34:46.021 clat (usec): min=742, max=16209, avg=6200.40, stdev=1621.70 00:34:46.021 lat (usec): min=751, max=16211, avg=6245.94, stdev=1624.50 00:34:46.021 clat percentiles (usec): 00:34:46.021 | 1.00th=[ 2343], 5.00th=[ 3818], 10.00th=[ 4146], 20.00th=[ 5014], 00:34:46.021 | 30.00th=[ 5473], 40.00th=[ 6063], 50.00th=[ 6259], 60.00th=[ 6456], 00:34:46.021 | 70.00th=[ 6587], 80.00th=[ 6980], 90.00th=[ 8356], 95.00th=[ 9241], 00:34:46.021 | 99.00th=[10683], 99.50th=[11207], 99.90th=[12125], 99.95th=[13304], 00:34:46.021 | 99.99th=[16188] 00:34:46.021 bw ( KiB/s): min=38832, max=38976, per=44.05%, avg=38904.00, stdev=101.82, samples=2 00:34:46.021 iops : min= 9708, max= 9744, avg=9726.00, stdev=25.46, samples=2 00:34:46.021 lat (usec) : 750=0.01%, 1000=0.01% 00:34:46.021 lat (msec) : 2=0.35%, 4=4.30%, 10=89.32%, 20=6.01% 00:34:46.021 cpu : usr=6.68%, sys=8.18%, ctx=618, majf=0, minf=1 00:34:46.021 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:34:46.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.021 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:46.021 issued rwts: total=9342,9728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:46.021 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:46.021 00:34:46.021 Run status group 0 (all jobs): 00:34:46.021 READ: bw=83.4MiB/s (87.4MB/s), 13.9MiB/s-36.3MiB/s (14.6MB/s-38.1MB/s), io=84.0MiB (88.1MB), run=1004-1008msec 00:34:46.021 WRITE: bw=86.3MiB/s (90.4MB/s), 14.5MiB/s-37.8MiB/s (15.2MB/s-39.7MB/s), io=86.9MiB (91.2MB), run=1004-1008msec 00:34:46.021 00:34:46.021 Disk stats (read/write): 00:34:46.021 nvme0n1: ios=3122/3246, merge=0/0, ticks=40383/60970, in_queue=101353, util=87.47% 00:34:46.021 nvme0n2: ios=4131/4151, merge=0/0, ticks=32818/25723, in_queue=58541, util=86.85% 00:34:46.021 nvme0n3: ios=3072/3127, merge=0/0, ticks=33821/61337, in_queue=95158, util=88.38% 00:34:46.021 nvme0n4: ios=7680/8085, merge=0/0, ticks=52068/47840, in_queue=99908, util=89.52% 00:34:46.021 19:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:34:46.021 [global] 00:34:46.021 thread=1 00:34:46.021 invalidate=1 00:34:46.021 rw=randwrite 00:34:46.021 time_based=1 00:34:46.021 runtime=1 00:34:46.021 ioengine=libaio 00:34:46.021 direct=1 00:34:46.021 bs=4096 00:34:46.021 iodepth=128 00:34:46.021 norandommap=0 00:34:46.021 numjobs=1 00:34:46.021 00:34:46.021 verify_dump=1 00:34:46.021 verify_backlog=512 00:34:46.021 verify_state_save=0 00:34:46.021 do_verify=1 00:34:46.021 verify=crc32c-intel 00:34:46.021 [job0] 00:34:46.021 filename=/dev/nvme0n1 00:34:46.021 [job1] 00:34:46.021 filename=/dev/nvme0n2 00:34:46.021 [job2] 00:34:46.021 filename=/dev/nvme0n3 00:34:46.021 [job3] 00:34:46.021 filename=/dev/nvme0n4 00:34:46.021 Could not set queue depth (nvme0n1) 00:34:46.021 Could not set queue depth (nvme0n2) 00:34:46.021 Could not set queue depth (nvme0n3) 00:34:46.021 Could not set queue depth (nvme0n4) 00:34:46.282 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:34:46.282 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:46.282 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:46.282 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:46.282 fio-3.35 00:34:46.282 Starting 4 threads 00:34:47.668 00:34:47.668 job0: (groupid=0, jobs=1): err= 0: pid=3219064: Tue Nov 26 19:25:04 2024 00:34:47.668 read: IOPS=5976, BW=23.3MiB/s (24.5MB/s)(23.5MiB/1005msec) 00:34:47.668 slat (nsec): min=911, max=11386k, avg=67163.14, stdev=485617.72 00:34:47.668 clat (usec): min=971, max=43324, avg=8842.04, stdev=4035.35 00:34:47.668 lat (usec): min=1854, max=43334, avg=8909.20, stdev=4081.21 00:34:47.668 clat percentiles (usec): 00:34:47.668 | 1.00th=[ 3851], 5.00th=[ 4359], 10.00th=[ 5669], 20.00th=[ 6980], 00:34:47.668 | 30.00th=[ 7439], 40.00th=[ 7832], 50.00th=[ 8160], 60.00th=[ 8455], 00:34:47.668 | 70.00th=[ 8586], 80.00th=[ 9634], 90.00th=[12649], 95.00th=[16909], 00:34:47.668 | 99.00th=[27919], 99.50th=[34341], 99.90th=[39584], 99.95th=[43254], 00:34:47.668 | 99.99th=[43254] 00:34:47.668 write: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec); 0 zone resets 00:34:47.668 slat (nsec): min=1492, max=8983.8k, avg=84505.15, stdev=526288.91 00:34:47.668 clat (usec): min=1220, max=75417, avg=12102.04, stdev=11067.51 00:34:47.668 lat (usec): min=1231, max=75427, avg=12186.55, stdev=11138.13 00:34:47.668 clat percentiles (usec): 00:34:47.668 | 1.00th=[ 2311], 5.00th=[ 4228], 10.00th=[ 5080], 20.00th=[ 6587], 00:34:47.668 | 30.00th=[ 7308], 40.00th=[ 7701], 50.00th=[ 8225], 60.00th=[ 8586], 00:34:47.668 | 70.00th=[ 9110], 80.00th=[13698], 90.00th=[25822], 95.00th=[37487], 00:34:47.668 | 99.00th=[59507], 99.50th=[67634], 99.90th=[74974], 99.95th=[74974], 00:34:47.668 | 99.99th=[74974] 00:34:47.668 bw ( KiB/s): min=20480, max=28672, per=25.32%, avg=24576.00, stdev=5792.62, samples=2 00:34:47.668 iops : min= 5120, max= 7168, avg=6144.00, stdev=1448.15, samples=2 00:34:47.668 lat (usec) : 1000=0.01% 00:34:47.668 lat (msec) : 2=0.44%, 4=2.47%, 10=75.91%, 20=12.18%, 50=7.96% 00:34:47.668 lat (msec) : 100=1.03% 00:34:47.668 cpu : usr=4.48%, sys=6.08%, ctx=448, majf=0, minf=1 00:34:47.668 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:47.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:47.668 issued rwts: total=6006,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:47.668 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:47.668 job1: (groupid=0, jobs=1): err= 0: pid=3219070: Tue Nov 26 19:25:04 2024 00:34:47.668 read: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec) 00:34:47.668 slat (nsec): min=947, max=7244.6k, avg=83951.92, stdev=487557.52 00:34:47.668 clat (usec): min=5228, max=27089, avg=10557.38, stdev=3620.10 00:34:47.668 lat (usec): min=5230, max=27116, avg=10641.33, stdev=3667.14 00:34:47.668 clat percentiles (usec): 00:34:47.668 | 1.00th=[ 6063], 5.00th=[ 6980], 10.00th=[ 7308], 20.00th=[ 8094], 00:34:47.668 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9765], 00:34:47.668 | 70.00th=[11076], 80.00th=[12649], 90.00th=[16581], 95.00th=[19006], 00:34:47.668 | 99.00th=[21627], 99.50th=[21627], 99.90th=[25297], 99.95th=[26084], 00:34:47.668 | 99.99th=[27132] 00:34:47.668 write: IOPS=6398, 
BW=25.0MiB/s (26.2MB/s)(25.1MiB/1003msec); 0 zone resets 00:34:47.668 slat (nsec): min=1566, max=9025.6k, avg=71776.38, stdev=390841.71 00:34:47.668 clat (usec): min=1864, max=24954, avg=9636.69, stdev=3760.79 00:34:47.668 lat (usec): min=2580, max=24988, avg=9708.47, stdev=3795.58 00:34:47.668 clat percentiles (usec): 00:34:47.668 | 1.00th=[ 5145], 5.00th=[ 5735], 10.00th=[ 6128], 20.00th=[ 6849], 00:34:47.668 | 30.00th=[ 7177], 40.00th=[ 7701], 50.00th=[ 8586], 60.00th=[ 9110], 00:34:47.668 | 70.00th=[10421], 80.00th=[12256], 90.00th=[14746], 95.00th=[18744], 00:34:47.668 | 99.00th=[21365], 99.50th=[22676], 99.90th=[22676], 99.95th=[22676], 00:34:47.668 | 99.99th=[25035] 00:34:47.668 bw ( KiB/s): min=24560, max=25768, per=25.92%, avg=25164.00, stdev=854.18, samples=2 00:34:47.668 iops : min= 6140, max= 6442, avg=6291.00, stdev=213.55, samples=2 00:34:47.668 lat (msec) : 2=0.01%, 4=0.25%, 10=64.68%, 20=31.79%, 50=3.26% 00:34:47.668 cpu : usr=2.40%, sys=5.49%, ctx=759, majf=0, minf=1 00:34:47.668 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:47.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:47.668 issued rwts: total=6144,6418,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:47.668 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:47.668 job2: (groupid=0, jobs=1): err= 0: pid=3219080: Tue Nov 26 19:25:04 2024 00:34:47.668 read: IOPS=5103, BW=19.9MiB/s (20.9MB/s)(20.8MiB/1045msec) 00:34:47.668 slat (nsec): min=933, max=14774k, avg=106077.62, stdev=765299.71 00:34:47.668 clat (usec): min=4121, max=52037, avg=14400.34, stdev=7453.08 00:34:47.668 lat (usec): min=4130, max=54540, avg=14506.42, stdev=7497.58 00:34:47.668 clat percentiles (usec): 00:34:47.668 | 1.00th=[ 6718], 5.00th=[ 8160], 10.00th=[ 8586], 20.00th=[ 9372], 00:34:47.668 | 30.00th=[ 9896], 40.00th=[10683], 50.00th=[12256], 60.00th=[14091], 00:34:47.668 | 70.00th=[15270], 80.00th=[18744], 90.00th=[22676], 95.00th=[25822], 00:34:47.668 | 99.00th=[51119], 99.50th=[51643], 99.90th=[52167], 99.95th=[52167], 00:34:47.668 | 99.99th=[52167] 00:34:47.668 write: IOPS=5389, BW=21.1MiB/s (22.1MB/s)(22.0MiB/1045msec); 0 zone resets 00:34:47.668 slat (nsec): min=1519, max=7242.6k, avg=71941.11, stdev=454871.43 00:34:47.668 clat (usec): min=1166, max=34953, avg=9899.59, stdev=2919.65 00:34:47.668 lat (usec): min=1178, max=34975, avg=9971.53, stdev=2949.00 00:34:47.668 clat percentiles (usec): 00:34:47.668 | 1.00th=[ 5407], 5.00th=[ 6390], 10.00th=[ 7504], 20.00th=[ 8160], 00:34:47.668 | 30.00th=[ 8848], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9634], 00:34:47.668 | 70.00th=[10421], 80.00th=[11338], 90.00th=[12649], 95.00th=[13829], 00:34:47.668 | 99.00th=[22676], 99.50th=[31589], 99.90th=[31589], 99.95th=[33817], 00:34:47.668 | 99.99th=[34866] 00:34:47.668 bw ( KiB/s): min=20480, max=24576, per=23.21%, avg=22528.00, stdev=2896.31, samples=2 00:34:47.668 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:34:47.668 lat (msec) : 2=0.02%, 4=0.07%, 10=49.00%, 20=42.90%, 50=7.43% 00:34:47.668 lat (msec) : 100=0.57% 00:34:47.668 cpu : usr=3.16%, sys=5.36%, ctx=398, majf=0, minf=1 00:34:47.668 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:34:47.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:47.668 issued rwts: 
total=5333,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:47.668 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:47.668 job3: (groupid=0, jobs=1): err= 0: pid=3219084: Tue Nov 26 19:25:04 2024 00:34:47.668 read: IOPS=6874, BW=26.9MiB/s (28.2MB/s)(27.0MiB/1005msec) 00:34:47.668 slat (nsec): min=931, max=9363.3k, avg=72233.99, stdev=513620.75 00:34:47.668 clat (usec): min=1277, max=25409, avg=9679.70, stdev=2915.70 00:34:47.668 lat (usec): min=2821, max=25414, avg=9751.93, stdev=2947.48 00:34:47.668 clat percentiles (usec): 00:34:47.668 | 1.00th=[ 3916], 5.00th=[ 5932], 10.00th=[ 6587], 20.00th=[ 7701], 00:34:47.668 | 30.00th=[ 8291], 40.00th=[ 8717], 50.00th=[ 9503], 60.00th=[ 9765], 00:34:47.668 | 70.00th=[10290], 80.00th=[11338], 90.00th=[13829], 95.00th=[15139], 00:34:47.668 | 99.00th=[19530], 99.50th=[20579], 99.90th=[25297], 99.95th=[25297], 00:34:47.668 | 99.99th=[25297] 00:34:47.668 write: IOPS=7132, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1005msec); 0 zone resets 00:34:47.668 slat (nsec): min=1531, max=7609.4k, avg=59808.09, stdev=417162.49 00:34:47.668 clat (usec): min=478, max=53197, avg=8461.82, stdev=4167.74 00:34:47.668 lat (usec): min=636, max=53201, avg=8521.63, stdev=4188.83 00:34:47.668 clat percentiles (usec): 00:34:47.668 | 1.00th=[ 2008], 5.00th=[ 3916], 10.00th=[ 4817], 20.00th=[ 5866], 00:34:47.668 | 30.00th=[ 7308], 40.00th=[ 7767], 50.00th=[ 8291], 60.00th=[ 8848], 00:34:47.668 | 70.00th=[ 9241], 80.00th=[ 9634], 90.00th=[10683], 95.00th=[12125], 00:34:47.668 | 99.00th=[30278], 99.50th=[36439], 99.90th=[50594], 99.95th=[53216], 00:34:47.668 | 99.99th=[53216] 00:34:47.668 bw ( KiB/s): min=27976, max=29368, per=29.53%, avg=28672.00, stdev=984.29, samples=2 00:34:47.668 iops : min= 6994, max= 7342, avg=7168.00, stdev=246.07, samples=2 00:34:47.668 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.05% 00:34:47.668 lat (msec) : 2=0.41%, 4=2.82%, 10=71.86%, 20=23.44%, 50=1.29% 00:34:47.668 lat (msec) : 100=0.10% 00:34:47.668 cpu : usr=4.68%, sys=6.97%, ctx=508, majf=0, minf=2 00:34:47.668 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:34:47.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:47.668 issued rwts: total=6909,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:47.668 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:47.668 00:34:47.668 Run status group 0 (all jobs): 00:34:47.668 READ: bw=91.2MiB/s (95.6MB/s), 19.9MiB/s-26.9MiB/s (20.9MB/s-28.2MB/s), io=95.3MiB (99.9MB), run=1003-1045msec 00:34:47.668 WRITE: bw=94.8MiB/s (99.4MB/s), 21.1MiB/s-27.9MiB/s (22.1MB/s-29.2MB/s), io=99.1MiB (104MB), run=1003-1045msec 00:34:47.668 00:34:47.668 Disk stats (read/write): 00:34:47.668 nvme0n1: ios=4658/4930, merge=0/0, ticks=27595/48292, in_queue=75887, util=87.98% 00:34:47.668 nvme0n2: ios=5105/5120, merge=0/0, ticks=18163/15663, in_queue=33826, util=100.00% 00:34:47.668 nvme0n3: ios=4158/4608, merge=0/0, ticks=27838/22400, in_queue=50238, util=88.17% 00:34:47.668 nvme0n4: ios=5632/6143, merge=0/0, ticks=36114/35295, in_queue=71409, util=88.68% 00:34:47.668 19:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:34:47.668 19:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3219393 00:34:47.668 19:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:34:47.669 19:25:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:34:47.669 [global] 00:34:47.669 thread=1 00:34:47.669 invalidate=1 00:34:47.669 rw=read 00:34:47.669 time_based=1 00:34:47.669 runtime=10 00:34:47.669 ioengine=libaio 00:34:47.669 direct=1 00:34:47.669 bs=4096 00:34:47.669 iodepth=1 00:34:47.669 norandommap=1 00:34:47.669 numjobs=1 00:34:47.669 00:34:47.669 [job0] 00:34:47.669 filename=/dev/nvme0n1 00:34:47.669 [job1] 00:34:47.669 filename=/dev/nvme0n2 00:34:47.669 [job2] 00:34:47.669 filename=/dev/nvme0n3 00:34:47.669 [job3] 00:34:47.669 filename=/dev/nvme0n4 00:34:47.669 Could not set queue depth (nvme0n1) 00:34:47.669 Could not set queue depth (nvme0n2) 00:34:47.669 Could not set queue depth (nvme0n3) 00:34:47.669 Could not set queue depth (nvme0n4) 00:34:47.927 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:47.927 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:47.927 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:47.927 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:47.927 fio-3.35 00:34:47.927 Starting 4 threads 00:34:51.226 19:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:34:51.226 19:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:34:51.226 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=10084352, buflen=4096 00:34:51.226 fio: pid=3219590, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:51.226 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=1597440, buflen=4096 00:34:51.226 fio: pid=3219589, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:51.226 19:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:51.226 19:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:34:51.226 19:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:51.226 19:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:34:51.226 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=14331904, buflen=4096 00:34:51.226 fio: pid=3219586, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:51.226 19:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:51.226 19:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete 
Malloc2 00:34:51.487 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=5025792, buflen=4096 00:34:51.487 fio: pid=3219587, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:51.487 00:34:51.487 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3219586: Tue Nov 26 19:25:08 2024 00:34:51.487 read: IOPS=1182, BW=4727KiB/s (4840kB/s)(13.7MiB/2961msec) 00:34:51.487 slat (usec): min=6, max=35589, avg=37.89, stdev=653.71 00:34:51.487 clat (usec): min=289, max=41536, avg=796.39, stdev=1188.29 00:34:51.487 lat (usec): min=315, max=41562, avg=834.28, stdev=1356.27 00:34:51.487 clat percentiles (usec): 00:34:51.487 | 1.00th=[ 553], 5.00th=[ 635], 10.00th=[ 668], 20.00th=[ 709], 00:34:51.487 | 30.00th=[ 734], 40.00th=[ 758], 50.00th=[ 766], 60.00th=[ 783], 00:34:51.487 | 70.00th=[ 799], 80.00th=[ 816], 90.00th=[ 832], 95.00th=[ 857], 00:34:51.487 | 99.00th=[ 914], 99.50th=[ 938], 99.90th=[ 4621], 99.95th=[41157], 00:34:51.487 | 99.99th=[41681] 00:34:51.487 bw ( KiB/s): min= 4976, max= 5152, per=52.75%, avg=5065.60, stdev=62.33, samples=5 00:34:51.487 iops : min= 1244, max= 1288, avg=1266.40, stdev=15.58, samples=5 00:34:51.487 lat (usec) : 500=0.37%, 750=35.89%, 1000=63.37% 00:34:51.487 lat (msec) : 2=0.23%, 10=0.03%, 50=0.09% 00:34:51.487 cpu : usr=1.35%, sys=3.04%, ctx=3503, majf=0, minf=1 00:34:51.487 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:51.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.487 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.488 issued rwts: total=3500,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.488 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:51.488 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3219587: Tue Nov 26 19:25:08 2024 00:34:51.488 read: IOPS=388, BW=1555KiB/s (1592kB/s)(4908KiB/3157msec) 00:34:51.488 slat (usec): min=6, max=31907, avg=78.97, stdev=1163.96 00:34:51.488 clat (usec): min=301, max=42195, avg=2470.68, stdev=8002.82 00:34:51.488 lat (usec): min=326, max=73154, avg=2549.69, stdev=8206.57 00:34:51.488 clat percentiles (usec): 00:34:51.488 | 1.00th=[ 506], 5.00th=[ 570], 10.00th=[ 627], 20.00th=[ 668], 00:34:51.488 | 30.00th=[ 701], 40.00th=[ 734], 50.00th=[ 783], 60.00th=[ 824], 00:34:51.488 | 70.00th=[ 881], 80.00th=[ 1074], 90.00th=[ 1352], 95.00th=[ 1500], 00:34:51.488 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:51.488 | 99.99th=[42206] 00:34:51.488 bw ( KiB/s): min= 96, max= 5200, per=16.71%, avg=1604.33, stdev=1819.48, samples=6 00:34:51.488 iops : min= 24, max= 1300, avg=401.00, stdev=454.90, samples=6 00:34:51.488 lat (usec) : 500=0.98%, 750=41.94%, 1000=34.61% 00:34:51.488 lat (msec) : 2=18.32%, 4=0.08%, 50=3.99% 00:34:51.488 cpu : usr=0.32%, sys=1.14%, ctx=1233, majf=0, minf=2 00:34:51.488 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:51.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.488 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.488 issued rwts: total=1228,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.488 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:51.488 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3219589: Tue Nov 26 19:25:08 2024 
00:34:51.488 read: IOPS=141, BW=565KiB/s (579kB/s)(1560KiB/2761msec) 00:34:51.488 slat (usec): min=7, max=21954, avg=117.42, stdev=1308.30 00:34:51.488 clat (usec): min=399, max=42021, avg=6899.82, stdev=14038.38 00:34:51.488 lat (usec): min=426, max=42048, avg=7017.47, stdev=14061.80 00:34:51.488 clat percentiles (usec): 00:34:51.488 | 1.00th=[ 457], 5.00th=[ 766], 10.00th=[ 898], 20.00th=[ 1012], 00:34:51.488 | 30.00th=[ 1074], 40.00th=[ 1156], 50.00th=[ 1205], 60.00th=[ 1270], 00:34:51.488 | 70.00th=[ 1336], 80.00th=[ 1467], 90.00th=[41157], 95.00th=[41157], 00:34:51.488 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:34:51.488 | 99.99th=[42206] 00:34:51.488 bw ( KiB/s): min= 96, max= 1312, per=4.11%, avg=395.20, stdev=525.62, samples=5 00:34:51.488 iops : min= 24, max= 328, avg=98.80, stdev=131.40, samples=5 00:34:51.488 lat (usec) : 500=1.28%, 750=3.58%, 1000=13.30% 00:34:51.488 lat (msec) : 2=67.01%, 10=0.26%, 50=14.32% 00:34:51.488 cpu : usr=0.11%, sys=0.47%, ctx=394, majf=0, minf=2 00:34:51.488 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:51.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.488 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.488 issued rwts: total=391,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.488 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:51.488 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3219590: Tue Nov 26 19:25:08 2024 00:34:51.488 read: IOPS=936, BW=3743KiB/s (3833kB/s)(9848KiB/2631msec) 00:34:51.488 slat (nsec): min=6871, max=60482, avg=26008.16, stdev=3224.84 00:34:51.488 clat (usec): min=225, max=42080, avg=1027.32, stdev=1168.14 00:34:51.488 lat (usec): min=251, max=42105, avg=1053.33, stdev=1167.92 00:34:51.488 clat percentiles (usec): 00:34:51.488 | 1.00th=[ 693], 5.00th=[ 783], 10.00th=[ 832], 20.00th=[ 898], 00:34:51.488 | 30.00th=[ 938], 40.00th=[ 971], 50.00th=[ 996], 60.00th=[ 1020], 00:34:51.488 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1156], 95.00th=[ 1205], 00:34:51.488 | 99.00th=[ 1303], 99.50th=[ 1319], 99.90th=[ 1926], 99.95th=[41157], 00:34:51.488 | 99.99th=[42206] 00:34:51.488 bw ( KiB/s): min= 3744, max= 4016, per=40.59%, avg=3897.60, stdev=115.57, samples=5 00:34:51.488 iops : min= 936, max= 1004, avg=974.40, stdev=28.89, samples=5 00:34:51.488 lat (usec) : 250=0.08%, 500=0.04%, 750=2.48%, 1000=49.49% 00:34:51.488 lat (msec) : 2=47.79%, 50=0.08% 00:34:51.488 cpu : usr=1.18%, sys=2.70%, ctx=2463, majf=0, minf=2 00:34:51.488 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:51.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.488 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.488 issued rwts: total=2463,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.488 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:51.488 00:34:51.488 Run status group 0 (all jobs): 00:34:51.488 READ: bw=9602KiB/s (9832kB/s), 565KiB/s-4727KiB/s (579kB/s-4840kB/s), io=29.6MiB (31.0MB), run=2631-3157msec 00:34:51.488 00:34:51.488 Disk stats (read/write): 00:34:51.488 nvme0n1: ios=3489/0, merge=0/0, ticks=2586/0, in_queue=2586, util=93.12% 00:34:51.488 nvme0n2: ios=1225/0, merge=0/0, ticks=2926/0, in_queue=2926, util=93.62% 00:34:51.488 nvme0n3: ios=321/0, merge=0/0, ticks=2565/0, in_queue=2565, util=95.99% 00:34:51.488 nvme0n4: ios=2461/0, 
merge=0/0, ticks=2428/0, in_queue=2428, util=96.39% 00:34:51.488 19:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:51.488 19:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:34:51.748 19:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:51.748 19:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:34:52.009 19:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:52.009 19:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:34:52.009 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:52.009 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:34:52.269 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:34:52.269 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3219393 00:34:52.269 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:34:52.269 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:52.269 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:52.269 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:52.269 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:34:52.269 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:52.269 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:52.269 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:52.269 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:52.269 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:34:52.269 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:34:52.269 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:34:52.269 nvmf hotplug test: fio failed as expected 00:34:52.269 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:52.530 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:34:52.530 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:34:52.530 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:34:52.530 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:34:52.530 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:34:52.530 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:52.530 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:34:52.530 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:52.530 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:34:52.530 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:52.530 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:52.530 rmmod nvme_tcp 00:34:52.530 rmmod nvme_fabrics 00:34:52.530 rmmod nvme_keyring 00:34:52.530 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:52.530 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:34:52.530 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:34:52.530 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3216229 ']' 00:34:52.530 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3216229 00:34:52.531 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3216229 ']' 00:34:52.531 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3216229 00:34:52.531 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:34:52.531 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:52.531 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3216229 00:34:52.791 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:52.791 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:52.791 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3216229' 00:34:52.791 killing process with pid 3216229 00:34:52.791 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3216229 00:34:52.791 19:25:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3216229 00:34:52.791 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:52.791 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:52.791 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:52.791 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:34:52.791 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:34:52.791 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:52.791 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:34:52.791 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:52.791 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:52.791 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:52.791 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:52.791 19:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:55.337 19:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:55.337 00:34:55.337 real 0m28.217s 00:34:55.337 user 2m22.755s 00:34:55.337 sys 0m12.035s 00:34:55.337 19:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:55.337 19:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:55.337 ************************************ 00:34:55.337 END TEST nvmf_fio_target 00:34:55.337 ************************************ 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:55.337 ************************************ 00:34:55.337 START TEST nvmf_bdevio 00:34:55.337 ************************************ 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:55.337 * Looking for test storage... 
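(A note on the hotplug pass condition traced a few entries up, before the bdevio log continues: the @63/@64 and @66 RPCs delete the RAID and malloc bdevs while the backgrounded @58 fio job is still reading, so the in-flight reads fail with "Operation not supported" and fio exits nonzero; the harness then treats that nonzero status as success. A condensed sketch of the traced flow, with fio_pid and the status value taken from this run and the real logic living in target/fio.sh:)

  fio_status=0                        # @69 in the trace
  wait "$fio_pid" || fio_status=$?    # @70: wait 3219393 -> fio_status=4
  if [ "$fio_status" -ne 0 ]; then    # the traced branch: '[' 4 -eq 0 ']' is false
      echo "nvmf hotplug test: fio failed as expected"
  fi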
00:34:55.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:55.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:55.337 --rc genhtml_branch_coverage=1 00:34:55.337 --rc genhtml_function_coverage=1 00:34:55.337 --rc genhtml_legend=1 00:34:55.337 --rc geninfo_all_blocks=1 00:34:55.337 --rc geninfo_unexecuted_blocks=1 00:34:55.337 00:34:55.337 ' 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:55.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:55.337 --rc genhtml_branch_coverage=1 00:34:55.337 --rc genhtml_function_coverage=1 00:34:55.337 --rc genhtml_legend=1 00:34:55.337 --rc geninfo_all_blocks=1 00:34:55.337 --rc geninfo_unexecuted_blocks=1 00:34:55.337 00:34:55.337 ' 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:55.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:55.337 --rc genhtml_branch_coverage=1 00:34:55.337 --rc genhtml_function_coverage=1 00:34:55.337 --rc genhtml_legend=1 00:34:55.337 --rc geninfo_all_blocks=1 00:34:55.337 --rc geninfo_unexecuted_blocks=1 00:34:55.337 00:34:55.337 ' 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:55.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:55.337 --rc genhtml_branch_coverage=1 00:34:55.337 --rc genhtml_function_coverage=1 00:34:55.337 --rc genhtml_legend=1 00:34:55.337 --rc geninfo_all_blocks=1 00:34:55.337 --rc geninfo_unexecuted_blocks=1 00:34:55.337 00:34:55.337 ' 00:34:55.337 19:25:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:55.337 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:34:55.338 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:55.338 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:55.338 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:55.338 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:55.338 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:55.338 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:55.338 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:34:55.338 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:55.338 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:34:55.338 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:55.338 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:55.338 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:55.338 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:55.338 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:55.338 19:25:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:55.338 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:55.338 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:55.338 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:55.338 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:55.338 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:55.338 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:55.338 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:34:55.338 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:55.338 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:55.338 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:55.338 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:55.338 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:55.338 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:55.338 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:55.338 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:55.338 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:55.338 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:55.338 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:34:55.338 19:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:03.487 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:03.487 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:35:03.487 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:03.487 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:03.487 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:03.487 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:03.487 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:03.487 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:35:03.487 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:35:03.487 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:35:03.487 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:35:03.487 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:35:03.487 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:03.488 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:03.488 19:25:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:03.488 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:03.488 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:03.488 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:03.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:03.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.556 ms 00:35:03.488 00:35:03.488 --- 10.0.0.2 ping statistics --- 00:35:03.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:03.488 rtt min/avg/max/mdev = 0.556/0.556/0.556/0.000 ms 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:03.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:03.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:35:03.488 00:35:03.488 --- 10.0.0.1 ping statistics --- 00:35:03.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:03.488 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:03.488 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:03.489 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:03.489 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:03.489 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:03.489 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:35:03.489 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:03.489 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:03.489 19:25:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:03.489 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3224613 00:35:03.489 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3224613 00:35:03.489 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:35:03.489 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3224613 ']' 00:35:03.489 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:03.489 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:03.489 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:03.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:03.489 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:03.489 19:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:03.489 [2024-11-26 19:25:19.879391] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:03.489 [2024-11-26 19:25:19.880541] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:35:03.489 [2024-11-26 19:25:19.880595] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:03.489 [2024-11-26 19:25:19.980746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:03.489 [2024-11-26 19:25:20.039367] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:03.489 [2024-11-26 19:25:20.039425] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:03.489 [2024-11-26 19:25:20.039434] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:03.489 [2024-11-26 19:25:20.039441] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:03.489 [2024-11-26 19:25:20.039447] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:03.489 [2024-11-26 19:25:20.041468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:03.489 [2024-11-26 19:25:20.041721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:03.489 [2024-11-26 19:25:20.041883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:03.489 [2024-11-26 19:25:20.041886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:03.489 [2024-11-26 19:25:20.127200] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
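Condensed from the setup traced above: one port of the E810 pair (cvl_0_0) is moved into a private network namespace and becomes the target at 10.0.0.2, its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the target app is launched inside the namespace in interrupt mode. A sketch with names, addresses, and flags taken verbatim from this run (paths shortened):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open NVMe/TCP port 4420; the comment tag is what teardown greps away later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # -m 0x78 pins reactors to cores 3-6, matching the 'Reactor started' notices above.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78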
00:35:03.489 [2024-11-26 19:25:20.127714] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:03.489 [2024-11-26 19:25:20.128232] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:03.489 [2024-11-26 19:25:20.128738] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:03.489 [2024-11-26 19:25:20.128777] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:03.489 19:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:03.489 19:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:35:03.489 19:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:03.489 19:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:03.489 19:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:03.751 19:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:03.751 19:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:03.751 19:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.751 19:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:03.751 [2024-11-26 19:25:20.738906] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:03.751 19:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.751 19:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:03.751 19:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.751 19:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:03.751 Malloc0 00:35:03.751 19:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.751 19:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:03.751 19:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.751 19:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:03.751 19:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.751 19:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:03.751 19:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.751 19:25:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:03.751 19:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.751 19:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:03.751 19:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.751 19:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:03.751 [2024-11-26 19:25:20.831280] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:03.751 19:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.751 19:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:35:03.751 19:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:35:03.751 19:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:35:03.751 19:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:35:03.751 19:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:03.751 19:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:03.751 { 00:35:03.751 "params": { 00:35:03.751 "name": "Nvme$subsystem", 00:35:03.751 "trtype": "$TEST_TRANSPORT", 00:35:03.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:03.751 "adrfam": "ipv4", 00:35:03.751 "trsvcid": "$NVMF_PORT", 00:35:03.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:03.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:03.751 "hdgst": ${hdgst:-false}, 00:35:03.751 "ddgst": ${ddgst:-false} 00:35:03.751 }, 00:35:03.751 "method": "bdev_nvme_attach_controller" 00:35:03.751 } 00:35:03.751 EOF 00:35:03.751 )") 00:35:03.751 19:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:35:03.751 19:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:35:03.751 19:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:35:03.751 19:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:03.751 "params": { 00:35:03.751 "name": "Nvme1", 00:35:03.751 "trtype": "tcp", 00:35:03.751 "traddr": "10.0.0.2", 00:35:03.751 "adrfam": "ipv4", 00:35:03.751 "trsvcid": "4420", 00:35:03.751 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:03.751 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:03.751 "hdgst": false, 00:35:03.751 "ddgst": false 00:35:03.751 }, 00:35:03.751 "method": "bdev_nvme_attach_controller" 00:35:03.751 }' 00:35:03.751 [2024-11-26 19:25:20.878430] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
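The target provisioning in this stretch is a five-step RPC sequence; rpc_cmd is a thin wrapper that sends each call to the app's /var/tmp/spdk.sock socket. Spelled out as explicit scripts/rpc.py invocations (an equivalent rendering rather than the literal helper), with the arguments exactly as traced:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM bdev, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The initiator side needs no kernel connect: bdevio reads the bdev_nvme_attach_controller JSON printed above on /dev/fd/62, i.e. bash process substitution along the lines of bdevio --json <(gen_nvmf_target_json).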
00:35:03.751 [2024-11-26 19:25:20.878511] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3224963 ] 00:35:04.012 [2024-11-26 19:25:20.972367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:04.012 [2024-11-26 19:25:21.028489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:04.012 [2024-11-26 19:25:21.028651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:04.012 [2024-11-26 19:25:21.028651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:04.273 I/O targets: 00:35:04.273 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:35:04.273 00:35:04.273 00:35:04.273 CUnit - A unit testing framework for C - Version 2.1-3 00:35:04.273 http://cunit.sourceforge.net/ 00:35:04.273 00:35:04.273 00:35:04.273 Suite: bdevio tests on: Nvme1n1 00:35:04.273 Test: blockdev write read block ...passed 00:35:04.273 Test: blockdev write zeroes read block ...passed 00:35:04.273 Test: blockdev write zeroes read no split ...passed 00:35:04.535 Test: blockdev write zeroes read split ...passed 00:35:04.535 Test: blockdev write zeroes read split partial ...passed 00:35:04.535 Test: blockdev reset ...[2024-11-26 19:25:21.566136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:35:04.535 [2024-11-26 19:25:21.566246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252c970 (9): Bad file descriptor 00:35:04.535 [2024-11-26 19:25:21.573597] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:35:04.535 passed 00:35:04.535 Test: blockdev write read 8 blocks ...passed 00:35:04.535 Test: blockdev write read size > 128k ...passed 00:35:04.535 Test: blockdev write read invalid size ...passed 00:35:04.535 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:04.535 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:04.535 Test: blockdev write read max offset ...passed 00:35:04.797 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:04.797 Test: blockdev writev readv 8 blocks ...passed 00:35:04.797 Test: blockdev writev readv 30 x 1block ...passed 00:35:04.797 Test: blockdev writev readv block ...passed 00:35:04.797 Test: blockdev writev readv size > 128k ...passed 00:35:04.797 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:04.797 Test: blockdev comparev and writev ...[2024-11-26 19:25:21.843008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:04.797 [2024-11-26 19:25:21.843059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:04.797 [2024-11-26 19:25:21.843076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:04.797 [2024-11-26 19:25:21.843096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:04.797 [2024-11-26 19:25:21.843719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:04.797 [2024-11-26 19:25:21.843733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:04.797 [2024-11-26 19:25:21.843747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:04.797 [2024-11-26 19:25:21.843755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:04.797 [2024-11-26 19:25:21.844413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:04.797 [2024-11-26 19:25:21.844426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:04.797 [2024-11-26 19:25:21.844440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:04.797 [2024-11-26 19:25:21.844448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:04.797 [2024-11-26 19:25:21.845070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:04.797 [2024-11-26 19:25:21.845084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:04.797 [2024-11-26 19:25:21.845098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:04.797 [2024-11-26 19:25:21.845106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:04.797 passed 00:35:04.797 Test: blockdev nvme passthru rw ...passed 00:35:04.797 Test: blockdev nvme passthru vendor specific ...[2024-11-26 19:25:21.930014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:04.797 [2024-11-26 19:25:21.930032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:04.797 [2024-11-26 19:25:21.930444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:04.797 [2024-11-26 19:25:21.930455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:04.797 [2024-11-26 19:25:21.930855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:04.797 [2024-11-26 19:25:21.930865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:04.797 [2024-11-26 19:25:21.931224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:04.797 [2024-11-26 19:25:21.931236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:04.797 passed 00:35:04.797 Test: blockdev nvme admin passthru ...passed 00:35:04.797 Test: blockdev copy ...passed 00:35:04.797 00:35:04.797 Run Summary: Type Total Ran Passed Failed Inactive 00:35:04.797 suites 1 1 n/a 0 0 00:35:04.797 tests 23 23 23 0 0 00:35:04.797 asserts 152 152 152 0 n/a 00:35:04.797 00:35:04.797 Elapsed time = 1.277 seconds 00:35:05.059 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:05.059 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.059 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:05.059 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.059 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:35:05.059 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:35:05.059 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:05.059 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:35:05.059 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:05.059 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:35:05.059 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:05.059 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:05.059 rmmod nvme_tcp 00:35:05.059 rmmod nvme_fabrics 00:35:05.059 rmmod nvme_keyring 00:35:05.059 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
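The module teardown here runs with errexit suspended so a busy module cannot abort cleanup while the last connections drain; only one pass is visible in the trace, so the retry condition in this sketch is an assumption, not the verbatim nvmfcleanup body:

    set +e                           # tolerate EBUSY from an in-use module
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break   # assumed exit condition
        sleep 1                                                           # assumed back-off
    done
    set -e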
00:35:05.059 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:35:05.059 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:35:05.059 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3224613 ']' 00:35:05.059 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3224613 00:35:05.059 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3224613 ']' 00:35:05.059 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3224613 00:35:05.059 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:35:05.059 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:05.059 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3224613 00:35:05.321 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:35:05.321 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:35:05.321 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3224613' 00:35:05.321 killing process with pid 3224613 00:35:05.321 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3224613 00:35:05.321 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3224613 00:35:05.321 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:05.321 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:05.321 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:05.321 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:35:05.321 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:35:05.321 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:05.321 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:35:05.321 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:05.321 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:05.321 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:05.321 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:05.321 19:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:07.869 19:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:07.869 00:35:07.869 real 0m12.508s 00:35:07.869 user 
0m10.867s 00:35:07.869 sys 0m6.572s 00:35:07.869 19:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:07.869 19:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:07.869 ************************************ 00:35:07.869 END TEST nvmf_bdevio 00:35:07.869 ************************************ 00:35:07.869 19:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:35:07.869 00:35:07.869 real 5m2.936s 00:35:07.869 user 10m28.010s 00:35:07.869 sys 2m7.083s 00:35:07.869 19:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:07.869 19:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:07.869 ************************************ 00:35:07.869 END TEST nvmf_target_core_interrupt_mode 00:35:07.869 ************************************ 00:35:07.869 19:25:24 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:07.869 19:25:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:07.869 19:25:24 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:07.869 19:25:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:07.869 ************************************ 00:35:07.869 START TEST nvmf_interrupt 00:35:07.869 ************************************ 00:35:07.869 19:25:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:07.869 * Looking for test storage... 
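killprocess, as traced for pid 3224613 just above, only signals a process it can positively identify: pid non-empty, still alive, and its comm name not sudo (here it resolves to reactor_3, the main reactor pinned to core 3). A condensed sketch of those checks; the real helper's sudo handling and error paths are fuller than this:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                        # must still be running
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_3 in this run
            [ "$process_name" != sudo ] || return 1       # condensed; the real helper special-cases sudo
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                       # reap it; returns its exit status
    }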
00:35:07.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:07.869 19:25:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:07.869 19:25:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:35:07.869 19:25:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:07.869 19:25:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:07.869 19:25:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:07.869 19:25:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:07.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.870 --rc genhtml_branch_coverage=1 00:35:07.870 --rc genhtml_function_coverage=1 00:35:07.870 --rc genhtml_legend=1 00:35:07.870 --rc geninfo_all_blocks=1 00:35:07.870 --rc geninfo_unexecuted_blocks=1 00:35:07.870 00:35:07.870 ' 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:07.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.870 --rc genhtml_branch_coverage=1 00:35:07.870 --rc genhtml_function_coverage=1 00:35:07.870 --rc genhtml_legend=1 00:35:07.870 --rc geninfo_all_blocks=1 00:35:07.870 --rc geninfo_unexecuted_blocks=1 00:35:07.870 00:35:07.870 ' 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:07.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.870 --rc genhtml_branch_coverage=1 00:35:07.870 --rc genhtml_function_coverage=1 00:35:07.870 --rc genhtml_legend=1 00:35:07.870 --rc geninfo_all_blocks=1 00:35:07.870 --rc geninfo_unexecuted_blocks=1 00:35:07.870 00:35:07.870 ' 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:07.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.870 --rc genhtml_branch_coverage=1 00:35:07.870 --rc genhtml_function_coverage=1 00:35:07.870 --rc genhtml_legend=1 00:35:07.870 --rc geninfo_all_blocks=1 00:35:07.870 --rc geninfo_unexecuted_blocks=1 00:35:07.870 00:35:07.870 ' 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:35:07.870 19:25:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:16.022 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:16.022 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:35:16.022 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:16.022 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:16.022 19:25:32 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:16.022 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:16.022 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:16.022 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:35:16.022 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:16.022 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:35:16.022 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:35:16.022 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:35:16.022 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:35:16.022 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:35:16.022 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:35:16.022 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:16.022 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:16.022 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:16.022 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:16.022 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:16.022 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:16.022 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:16.022 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:16.023 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:16.023 19:25:32 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:16.023 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:16.023 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:16.023 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:16.023 19:25:32 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:16.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:16.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:35:16.023 00:35:16.023 --- 10.0.0.2 ping statistics --- 00:35:16.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:16.023 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:16.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
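The nvmf_tcp_init sequence above wires a loopback topology out of the two E810 ports: cvl_0_0 is moved into a private network namespace to serve as the target at 10.0.0.2, while its sibling cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and both directions are ping-verified before the target starts. A minimal standalone sketch of that plumbing (run as root; the cvl_0_0/cvl_0_1 names are assumed to exist already, and the comment tagging added by the real ipts helper is omitted):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

Everything after this point runs the target inside the namespace via the ip netns exec cvl_0_0_ns_spdk prefix, which is exactly the NVMF_TARGET_NS_CMD wrapper visible on the nvmf_tgt command line below.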
00:35:16.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:35:16.023 00:35:16.023 --- 10.0.0.1 ping statistics --- 00:35:16.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:16.023 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3229309 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3229309 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 3229309 ']' 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:16.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:16.023 19:25:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:16.023 [2024-11-26 19:25:32.527995] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:16.023 [2024-11-26 19:25:32.529141] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:35:16.023 [2024-11-26 19:25:32.529203] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:16.023 [2024-11-26 19:25:32.629651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:16.023 [2024-11-26 19:25:32.682685] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:35:16.023 [2024-11-26 19:25:32.682739] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:16.023 [2024-11-26 19:25:32.682748] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:16.023 [2024-11-26 19:25:32.682756] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:16.023 [2024-11-26 19:25:32.682763] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:16.023 [2024-11-26 19:25:32.684359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:16.023 [2024-11-26 19:25:32.684388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:16.023 [2024-11-26 19:25:32.762876] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:16.024 [2024-11-26 19:25:32.763563] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:16.024 [2024-11-26 19:25:32.763831] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:16.313 19:25:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:16.313 19:25:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:35:16.313 19:25:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:16.313 19:25:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:16.314 19:25:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:16.314 19:25:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:16.314 19:25:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:35:16.314 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:35:16.314 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:35:16.314 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:35:16.314 5000+0 records in 00:35:16.314 5000+0 records out 00:35:16.314 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0188518 s, 543 MB/s 00:35:16.314 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:35:16.314 19:25:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.314 19:25:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:16.314 AIO0 00:35:16.314 19:25:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.314 19:25:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:35:16.314 19:25:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.314 19:25:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:16.314 [2024-11-26 19:25:33.481463] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:16.314 19:25:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.314 19:25:33 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:16.314 19:25:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:16.575 [2024-11-26 19:25:33.526035] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3229309 0 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3229309 0 idle 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3229309 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3229309 -w 256 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3229309 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.32 reactor_0' 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3229309 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.32 reactor_0 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3229309 1 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3229309 1 idle 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3229309 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3229309 -w 256 00:35:16.575 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:16.836 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3229314 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1' 00:35:16.836 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3229314 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1 00:35:16.836 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:16.836 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:16.836 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:16.836 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:16.836 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:16.836 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:16.836 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:16.836 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:16.836 19:25:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:35:16.836 19:25:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3229686 00:35:16.836 19:25:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:16.836 19:25:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
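Every reactor_is_busy/reactor_is_idle probe in this run reduces to the same sampling recipe visible in the trace: take one threaded batch snapshot of the target with top, keep the reactor_<idx> row, and read its %CPU (field 9). A condensed reconstruction of what interrupt/common.sh is doing, collapsing the separate busy (30 here, 65 by default) and idle (30) thresholds into a single cutoff for brevity:

# %CPU of thread reactor_<idx> inside process <pid>, from one top sample.
reactor_cpu() {
    local pid=$1 idx=$2
    top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx" \
        | sed -e 's/^\s*//g' | awk '{print $9}'
}

rate=$(reactor_cpu 3229309 0)   # e.g. "0.0" when idle, "99.9" under load
rate=${rate%.*}                 # integer truncation, as in cpu_rate=6.7 -> 6
(( rate > 30 )) && echo busy || echo idle

The busy check also retries: the first sample taken right after spdk_nvme_perf launches still reads 6.7% above, below the 30% threshold, so the harness sleeps 1 s and samples again (the (( j = 10 )) countdown), landing on 99.9% once the perf queues fill.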
00:35:16.836 19:25:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:16.836 19:25:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3229309 0 00:35:16.836 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3229309 0 busy 00:35:16.836 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3229309 00:35:16.836 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:16.836 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:16.836 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:16.837 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:16.837 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:16.837 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:16.837 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:16.837 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:16.837 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3229309 -w 256 00:35:16.837 19:25:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:17.097 19:25:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3229309 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:00.33 reactor_0' 00:35:17.097 19:25:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3229309 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:00.33 reactor_0 00:35:17.097 19:25:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:17.097 19:25:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:17.097 19:25:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:35:17.097 19:25:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:35:17.097 19:25:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:17.097 19:25:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:17.097 19:25:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:35:18.039 19:25:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:35:18.039 19:25:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:18.039 19:25:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3229309 -w 256 00:35:18.039 19:25:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:18.299 19:25:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3229309 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.54 reactor_0' 00:35:18.299 19:25:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3229309 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.54 reactor_0 00:35:18.299 19:25:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:18.299 19:25:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:18.299 19:25:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:35:18.299 19:25:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:35:18.299 19:25:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:18.299 19:25:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( 
cpu_rate < busy_threshold )) 00:35:18.299 19:25:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:18.300 19:25:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:18.300 19:25:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:18.300 19:25:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:18.300 19:25:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3229309 1 00:35:18.300 19:25:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3229309 1 busy 00:35:18.300 19:25:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3229309 00:35:18.300 19:25:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:18.300 19:25:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:18.300 19:25:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:18.300 19:25:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:18.300 19:25:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:18.300 19:25:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:18.300 19:25:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:18.300 19:25:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:18.300 19:25:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3229309 -w 256 00:35:18.300 19:25:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:18.300 19:25:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3229314 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:01.30 reactor_1' 00:35:18.300 19:25:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3229314 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:01.30 reactor_1 00:35:18.300 19:25:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:18.300 19:25:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:18.300 19:25:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:35:18.300 19:25:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:35:18.300 19:25:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:18.300 19:25:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:18.300 19:25:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:18.300 19:25:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:18.300 19:25:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3229686 00:35:28.293 Initializing NVMe Controllers 00:35:28.293 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:28.293 Controller IO queue size 256, less than required. 00:35:28.293 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:28.293 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:28.293 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:28.293 Initialization complete. Launching workers. 
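In the latency table below, the Total row is derivable from the per-core rows: IOPS is their plain sum (19275.67 + 20241.17 = 39516.84), average latency is the IOPS-weighted mean ((19275.67*13285.61 + 20241.17*12648.94) / 39516.84 ≈ 12959.50 us), and min/max are the extremes across rows. A quick awk re-check over a saved copy of this output (perf.log is a hypothetical filename; the field arithmetic assumes the five trailing numeric columns are IOPS, MiB/s, Average, min, max, as printed here):

awk '/from core/ {
        iops += $(NF-4)              # IOPS column
        wlat += $(NF-4) * $(NF-2)    # IOPS-weighted Average column
     }
     END { printf "%.2f IOPS, %.2f us avg\n", iops, wlat/iops }' perf.log

For the two rows below this prints 39516.84 IOPS, 12959.50 us avg, matching the Total row.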
00:35:28.293 ======================================================== 00:35:28.293 Latency(us) 00:35:28.293 Device Information : IOPS MiB/s Average min max 00:35:28.293 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 19275.67 75.30 13285.61 4092.61 32625.53 00:35:28.293 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 20241.17 79.07 12648.94 7734.32 30641.93 00:35:28.293 ======================================================== 00:35:28.293 Total : 39516.84 154.36 12959.50 4092.61 32625.53 00:35:28.293 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3229309 0 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3229309 0 idle 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3229309 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3229309 -w 256 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3229309 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.32 reactor_0' 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3229309 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.32 reactor_0 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3229309 1 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3229309 1 idle 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3229309 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3229309 -w 256 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3229314 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.01 reactor_1' 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3229314 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.01 reactor_1 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:28.293 19:25:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:28.293 19:25:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:35:28.293 19:25:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:35:28.293 19:25:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:28.293 19:25:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:35:28.293 19:25:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:35:30.205 19:25:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:30.205 19:25:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:35:30.205 19:25:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:30.205 19:25:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:35:30.205 19:25:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:30.205 19:25:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:35:30.205 19:25:47 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:35:30.205 19:25:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3229309 0 00:35:30.205 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3229309 0 idle 00:35:30.205 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3229309 00:35:30.205 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:30.205 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:30.205 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:30.205 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:30.205 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:30.205 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:30.205 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:30.205 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:30.205 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:30.205 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3229309 -w 256 00:35:30.205 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:30.465 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3229309 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.67 reactor_0' 00:35:30.465 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3229309 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.67 reactor_0 00:35:30.465 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:30.465 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:30.465 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:30.465 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:30.465 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:30.465 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:30.465 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:30.465 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:30.465 19:25:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:30.465 19:25:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3229309 1 00:35:30.465 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3229309 1 idle 00:35:30.465 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3229309 00:35:30.465 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:30.465 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:30.465 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:30.465 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:30.465 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:30.465 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:30.465 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
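These idle checks run with an initiator attached: just before them the host connected with nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 and then waited for the namespace to surface as a block device, matching on the subsystem serial. The waitforserial helper traced above reduces to the poll below (a reconstruction; the real helper in autotest_common.sh also accepts an expected device count, defaulting to 1):

# Poll until a block device carrying the given serial appears (~30 s budget).
waitforserial() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        sleep 2
        if (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )); then
            return 0
        fi
    done
    return 1
}

waitforserial SPDKISFASTANDAWESOME

Its mirror image, waitforserial_disconnect, runs after the nvme disconnect further down and polls the same lsblk listing until the serial is gone.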
00:35:30.465 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:30.465 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:30.465 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3229309 -w 256 00:35:30.465 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:30.465 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3229314 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.15 reactor_1' 00:35:30.465 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3229314 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.15 reactor_1 00:35:30.465 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:30.465 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:30.465 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:30.465 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:30.465 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:30.465 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:30.465 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:30.465 19:25:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:30.465 19:25:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:30.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:30.725 19:25:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:30.725 19:25:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:35:30.725 19:25:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:30.725 19:25:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:30.725 19:25:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:30.725 19:25:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:30.725 19:25:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:35:30.725 19:25:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:35:30.725 19:25:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:35:30.725 19:25:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:30.725 19:25:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:35:30.725 19:25:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:30.725 19:25:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:35:30.725 19:25:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:30.725 19:25:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:30.725 rmmod nvme_tcp 00:35:30.725 rmmod nvme_fabrics 00:35:30.725 rmmod nvme_keyring 00:35:30.725 19:25:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:30.725 19:25:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:35:30.725 19:25:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:35:30.725 19:25:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
3229309 ']' 00:35:30.725 19:25:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3229309 00:35:30.725 19:25:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 3229309 ']' 00:35:30.725 19:25:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 3229309 00:35:30.725 19:25:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:35:30.725 19:25:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:30.725 19:25:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3229309 00:35:30.985 19:25:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:30.985 19:25:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:30.985 19:25:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3229309' 00:35:30.985 killing process with pid 3229309 00:35:30.985 19:25:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 3229309 00:35:30.985 19:25:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 3229309 00:35:30.985 19:25:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:30.985 19:25:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:30.985 19:25:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:30.985 19:25:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:35:30.985 19:25:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:35:30.985 19:25:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:30.985 19:25:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:35:30.985 19:25:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:30.985 19:25:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:30.985 19:25:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:30.985 19:25:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:30.985 19:25:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:33.527 19:25:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:33.527 00:35:33.527 real 0m25.531s 00:35:33.527 user 0m40.536s 00:35:33.527 sys 0m9.748s 00:35:33.527 19:25:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:33.527 19:25:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:33.527 ************************************ 00:35:33.527 END TEST nvmf_interrupt 00:35:33.527 ************************************ 00:35:33.527 00:35:33.527 real 30m12.678s 00:35:33.527 user 61m43.657s 00:35:33.527 sys 10m22.098s 00:35:33.527 19:25:50 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:33.527 19:25:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:33.527 ************************************ 00:35:33.527 END TEST nvmf_tcp 00:35:33.527 ************************************ 00:35:33.527 19:25:50 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:35:33.527 19:25:50 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:33.527 19:25:50 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:33.527 19:25:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:33.527 19:25:50 -- common/autotest_common.sh@10 -- # set +x 00:35:33.527 ************************************ 00:35:33.527 START TEST spdkcli_nvmf_tcp 00:35:33.527 ************************************ 00:35:33.527 19:25:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:33.527 * Looking for test storage... 00:35:33.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:35:33.527 19:25:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:33.527 19:25:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:35:33.527 19:25:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:33.527 19:25:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:33.527 19:25:50 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:33.527 19:25:50 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:33.527 19:25:50 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:33.527 19:25:50 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:35:33.527 19:25:50 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:35:33.527 19:25:50 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:33.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:33.528 --rc genhtml_branch_coverage=1 00:35:33.528 --rc genhtml_function_coverage=1 00:35:33.528 --rc genhtml_legend=1 00:35:33.528 --rc geninfo_all_blocks=1 00:35:33.528 --rc geninfo_unexecuted_blocks=1 00:35:33.528 00:35:33.528 ' 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:33.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:33.528 --rc genhtml_branch_coverage=1 00:35:33.528 --rc genhtml_function_coverage=1 00:35:33.528 --rc genhtml_legend=1 00:35:33.528 --rc geninfo_all_blocks=1 00:35:33.528 --rc geninfo_unexecuted_blocks=1 00:35:33.528 00:35:33.528 ' 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:33.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:33.528 --rc genhtml_branch_coverage=1 00:35:33.528 --rc genhtml_function_coverage=1 00:35:33.528 --rc genhtml_legend=1 00:35:33.528 --rc geninfo_all_blocks=1 00:35:33.528 --rc geninfo_unexecuted_blocks=1 00:35:33.528 00:35:33.528 ' 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:33.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:33.528 --rc genhtml_branch_coverage=1 00:35:33.528 --rc genhtml_function_coverage=1 00:35:33.528 --rc genhtml_legend=1 00:35:33.528 --rc geninfo_all_blocks=1 00:35:33.528 --rc geninfo_unexecuted_blocks=1 00:35:33.528 00:35:33.528 ' 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:35:33.528 
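The lcov version gate traced above boils down to a field-wise version comparison in plain shell: split both version strings on dots and dashes, then compare component by component until one side differs. A compact sketch of the same idea, assuming purely numeric components (the function name is illustrative, not the one defined in scripts/common.sh):

    version_lt() {                      # succeeds (returns 0) when $1 sorts before $2
        local IFS='.-'
        local -a a b
        local i
        read -ra a <<< "$1"             # IFS makes read split on '.' and '-'
        read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first lower field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # first higher field decides
        done
        return 1                        # equal versions are not less-than
    }

    version_lt 1.15 2 && echo 'lcov predates 2.x; use the legacy --rc options'

Missing fields default to 0, so 1.15 compared against 2 is decided on the first field, exactly as in the trace above.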
19:25:50 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:35:33.528 19:25:50 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:33.528 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3232868 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3232868 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 3232868 ']' 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:33.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:33.528 19:25:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:33.528 [2024-11-26 19:25:50.644432] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
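run_nvmf_tgt above launches the target on a two-core mask and then blocks in waitforlisten until the daemon's JSON-RPC socket answers, so no spdkcli command can race the startup. A minimal sketch of that start-and-wait pattern, assuming a stock SPDK build tree and the default RPC socket path (the ten-second polling budget is illustrative, not taken from the test scripts):

    # Launch nvmf_tgt on cores 0-1 (mask 0x3) with core 0 as the main core.
    ./build/bin/nvmf_tgt -m 0x3 -p 0 &
    tgt_pid=$!

    # Poll the default RPC socket until the target responds before configuring it.
    for _ in $(seq 1 100); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done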
00:35:33.528 [2024-11-26 19:25:50.644495] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3232868 ] 00:35:33.528 [2024-11-26 19:25:50.734421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:33.824 [2024-11-26 19:25:50.772875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:33.824 [2024-11-26 19:25:50.772878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:34.397 19:25:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:34.397 19:25:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:35:34.397 19:25:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:35:34.397 19:25:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:34.397 19:25:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:34.397 19:25:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:35:34.397 19:25:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:35:34.397 19:25:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:35:34.397 19:25:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:34.397 19:25:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:34.397 19:25:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:34.397 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:34.397 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:35:34.397 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:35:34.397 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:35:34.397 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:35:34.397 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:35:34.397 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:34.397 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:35:34.397 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:35:34.397 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:34.397 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:34.397 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:35:34.397 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:34.397 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:34.397 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:35:34.397 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:35:34.397 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:34.397 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:34.397 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:34.397 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:35:34.397 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:35:34.397 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:34.397 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:35:34.397 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:34.397 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:35:34.397 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:35:34.397 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:35:34.397 ' 00:35:37.697 [2024-11-26 19:25:54.239123] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:38.637 [2024-11-26 19:25:55.603292] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:35:41.179 [2024-11-26 19:25:58.130312] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:35:43.743 [2024-11-26 19:26:00.336730] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:45.126 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:45.126 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:45.126 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:45.127 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:45.127 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:45.127 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:45.127 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:45.127 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:45.127 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:45.127 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:45.127 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:45.127 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:45.127 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:45.127 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:45.127 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:45.127 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:45.127 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:45.127 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:45.127 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:45.127 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:45.127 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:45.127 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:45.127 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:45.127 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:45.127 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:45.127 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:45.127 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:45.127 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:45.127 19:26:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:45.127 19:26:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:45.127 19:26:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:45.127 19:26:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:45.127 19:26:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:45.127 19:26:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:45.127 19:26:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:35:45.127 19:26:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:35:45.388 19:26:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:45.388 19:26:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:45.649 19:26:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:45.649 19:26:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:45.649 19:26:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:45.649 
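check_match above validates the freshly built configuration by listing the /nvmf tree through spdkcli and comparing it against a golden file committed to the repo. A rough stand-in for that step, assuming the same paths; plain diff is used here for brevity, whereas SPDK's match helper additionally understands wildcard lines in the .match file:

    # Capture the live /nvmf tree exactly as spdkcli renders it ...
    ./scripts/spdkcli.py ll /nvmf > /tmp/spdkcli_nvmf.test

    # ... then compare it with the expected listing recorded alongside the test.
    diff -u test/spdkcli/match_files/spdkcli_nvmf.test.match /tmp/spdkcli_nvmf.test \
        && echo 'configuration matches' || echo 'configuration drifted'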
19:26:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:45.649 19:26:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:45.649 19:26:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:45.649 19:26:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:45.649 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:45.649 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:45.649 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:45.649 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:45.649 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:45.649 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:45.649 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:45.649 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:45.649 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:45.649 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:45.649 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:45.649 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:45.649 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:45.649 ' 00:35:52.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:52.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:52.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:52.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:52.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:52.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:52.230 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:52.230 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:52.230 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:52.230 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:52.230 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:52.230 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:52.230 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:52.230 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:52.230 19:26:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:52.230 19:26:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:52.230 19:26:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:52.230 
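The clear-config pass above unwinds the configuration in reverse dependency order: namespaces and hosts come out first, then listeners, then the subsystems themselves, and only then the malloc bdevs they referenced. The same steps can be issued one at a time with spdkcli; a sketch using only command forms that appear in the job list above:

    ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1
    ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all
    ./scripts/spdkcli.py /nvmf/subsystem delete_all     # remaining subsystems in one shot
    ./scripts/spdkcli.py /bdevs/malloc delete Malloc1   # backing bdevs go last

Deleting a bdev while a subsystem still exposes it would yank the namespace out from under any live connection, which is why the order matters.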
19:26:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3232868 00:35:52.230 19:26:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3232868 ']' 00:35:52.230 19:26:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3232868 00:35:52.230 19:26:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:35:52.230 19:26:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:52.230 19:26:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3232868 00:35:52.230 19:26:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:52.230 19:26:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:52.230 19:26:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3232868' 00:35:52.230 killing process with pid 3232868 00:35:52.230 19:26:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 3232868 00:35:52.230 19:26:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 3232868 00:35:52.230 19:26:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:52.230 19:26:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:52.230 19:26:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3232868 ']' 00:35:52.230 19:26:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3232868 00:35:52.230 19:26:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3232868 ']' 00:35:52.230 19:26:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3232868 00:35:52.230 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3232868) - No such process 00:35:52.230 19:26:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 3232868 is not found' 00:35:52.230 Process with pid 3232868 is not found 00:35:52.230 19:26:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:52.230 19:26:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:52.230 19:26:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:52.230 00:35:52.230 real 0m18.177s 00:35:52.230 user 0m40.386s 00:35:52.230 sys 0m0.885s 00:35:52.230 19:26:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:52.230 19:26:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:52.230 ************************************ 00:35:52.230 END TEST spdkcli_nvmf_tcp 00:35:52.230 ************************************ 00:35:52.230 19:26:08 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:52.230 19:26:08 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:52.230 19:26:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:52.230 19:26:08 -- common/autotest_common.sh@10 -- # set +x 00:35:52.230 ************************************ 00:35:52.230 START TEST nvmf_identify_passthru 00:35:52.230 ************************************ 00:35:52.230 19:26:08 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:52.230 * Looking for test 
storage... 00:35:52.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:52.230 19:26:08 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:52.230 19:26:08 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:35:52.230 19:26:08 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:52.230 19:26:08 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:52.230 19:26:08 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:52.230 19:26:08 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:52.230 19:26:08 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:52.230 19:26:08 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:35:52.230 19:26:08 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:35:52.230 19:26:08 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:35:52.230 19:26:08 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:35:52.230 19:26:08 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:35:52.230 19:26:08 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:35:52.230 19:26:08 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:35:52.230 19:26:08 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:52.230 19:26:08 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:35:52.230 19:26:08 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:35:52.230 19:26:08 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:52.230 19:26:08 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:52.230 19:26:08 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:35:52.230 19:26:08 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:35:52.230 19:26:08 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:52.230 19:26:08 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:35:52.230 19:26:08 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:35:52.230 19:26:08 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:35:52.230 19:26:08 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:35:52.230 19:26:08 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:52.230 19:26:08 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:35:52.230 19:26:08 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:35:52.230 19:26:08 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:52.230 19:26:08 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:52.230 19:26:08 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:35:52.230 19:26:08 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:52.230 19:26:08 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:52.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:52.231 --rc genhtml_branch_coverage=1 00:35:52.231 --rc genhtml_function_coverage=1 00:35:52.231 --rc genhtml_legend=1 00:35:52.231 --rc geninfo_all_blocks=1 00:35:52.231 --rc geninfo_unexecuted_blocks=1 00:35:52.231 00:35:52.231 ' 00:35:52.231 19:26:08 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:52.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:52.231 --rc genhtml_branch_coverage=1 00:35:52.231 --rc genhtml_function_coverage=1 00:35:52.231 --rc genhtml_legend=1 00:35:52.231 --rc geninfo_all_blocks=1 00:35:52.231 --rc geninfo_unexecuted_blocks=1 00:35:52.231 00:35:52.231 ' 00:35:52.231 19:26:08 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:52.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:52.231 --rc genhtml_branch_coverage=1 00:35:52.231 --rc genhtml_function_coverage=1 00:35:52.231 --rc genhtml_legend=1 00:35:52.231 --rc geninfo_all_blocks=1 00:35:52.231 --rc geninfo_unexecuted_blocks=1 00:35:52.231 00:35:52.231 ' 00:35:52.231 19:26:08 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:52.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:52.231 --rc genhtml_branch_coverage=1 00:35:52.231 --rc genhtml_function_coverage=1 00:35:52.231 --rc genhtml_legend=1 00:35:52.231 --rc geninfo_all_blocks=1 00:35:52.231 --rc geninfo_unexecuted_blocks=1 00:35:52.231 00:35:52.231 ' 00:35:52.231 19:26:08 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:52.231 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:52.231 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:52.231 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:52.231 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:52.231 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:52.231 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:52.231 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:52.231 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:52.231 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:52.231 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:52.231 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:52.231 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:52.231 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:52.231 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:52.231 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:52.231 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:52.231 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:52.231 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:52.231 19:26:08 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:52.231 19:26:08 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:52.231 19:26:08 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:52.231 19:26:08 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:52.231 19:26:08 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.231 19:26:08 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.231 19:26:08 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.231 19:26:08 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:52.231 19:26:08 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.231 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:35:52.231 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:52.231 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:52.231 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:52.231 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:52.231 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:52.231 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:52.231 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:52.231 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:52.231 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:52.231 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:52.231 19:26:08 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:52.231 19:26:08 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:52.231 19:26:08 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:52.231 19:26:08 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:52.231 19:26:08 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:52.231 19:26:08 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.231 19:26:08 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.231 19:26:08 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.231 19:26:08 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:52.231 19:26:08 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.231 19:26:08 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:52.231 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:52.231 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:52.231 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:52.231 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:52.231 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:52.231 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:52.231 19:26:08 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:52.231 19:26:08 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:52.231 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:52.231 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:52.231 19:26:08 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:35:52.231 19:26:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:00.374 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:00.374 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:36:00.374 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:00.374 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:00.374 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:00.374 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:00.374 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:00.374 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:36:00.374 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:00.374 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:36:00.374 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:36:00.374 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:36:00.374 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:36:00.374 19:26:16 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:36:00.374 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:36:00.374 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:00.374 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:00.374 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:00.374 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:00.374 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:00.374 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:00.374 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:00.374 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:00.374 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:00.374 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:00.374 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:00.374 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:00.374 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:00.374 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:00.374 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:00.375 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:00.375 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:00.375 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:00.375 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:00.375 19:26:16 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:00.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:00.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.587 ms 00:36:00.375 00:36:00.375 --- 10.0.0.2 ping statistics --- 00:36:00.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:00.375 rtt min/avg/max/mdev = 0.587/0.587/0.587/0.000 ms 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:00.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:00.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:36:00.375 00:36:00.375 --- 10.0.0.1 ping statistics --- 00:36:00.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:00.375 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:00.375 19:26:16 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:00.375 19:26:16 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:36:00.375 19:26:16 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:00.375 19:26:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:00.375 19:26:16 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:36:00.375 19:26:16 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:36:00.375 19:26:16 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:36:00.375 19:26:16 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:36:00.375 19:26:16 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:36:00.375 19:26:16 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:36:00.375 19:26:16 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:36:00.375 19:26:16 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:00.376 19:26:16 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:00.376 19:26:16 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:36:00.376 19:26:16 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:36:00.376 19:26:16 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:36:00.376 19:26:16 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:36:00.376 19:26:16 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:36:00.376 19:26:16 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:36:00.376 19:26:16 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:00.376 19:26:16 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:36:00.376 19:26:16 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:36:00.376 19:26:17 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605487 00:36:00.376 19:26:17 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:00.376 19:26:17 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:36:00.376 19:26:17 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:36:00.376 19:26:17 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:36:00.376 19:26:17 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:36:00.376 19:26:17 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:00.376 19:26:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:00.376 19:26:17 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:36:00.376 19:26:17 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:00.376 19:26:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:00.637 19:26:17 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3240278 00:36:00.637 19:26:17 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:00.637 19:26:17 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:36:00.637 19:26:17 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3240278 00:36:00.637 19:26:17 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 3240278 ']' 00:36:00.637 19:26:17 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:00.637 19:26:17 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:00.637 19:26:17 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:00.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:00.637 19:26:17 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:00.637 19:26:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:00.637 [2024-11-26 19:26:17.643417] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:36:00.637 [2024-11-26 19:26:17.643485] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:00.637 [2024-11-26 19:26:17.745050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:00.637 [2024-11-26 19:26:17.798697] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:00.637 [2024-11-26 19:26:17.798751] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
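The serial and model number that feed the passthru assertions are scraped straight out of spdk_nvme_identify's text report, as traced above. A condensed sketch of that scrape against the same BDF; note that awk '{print $3}' keeps only the first whitespace-separated token after the label, which is why a multi-word model string surfaces here as just SAMSUNG:

    bdf=0000:65:00.0
    identify=./build/bin/spdk_nvme_identify

    # Pull the third field of the matching report lines: "Serial Number: <value>".
    serial=$("$identify" -r "trtype:PCIe traddr:$bdf" -i 0 | awk '/Serial Number:/ {print $3}')
    model=$("$identify" -r "trtype:PCIe traddr:$bdf" -i 0 | awk '/Model Number:/ {print $3}')
    echo "serial=$serial model=$model"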
00:36:00.637 [2024-11-26 19:26:17.798759] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:00.637 [2024-11-26 19:26:17.798767] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:00.637 [2024-11-26 19:26:17.798774] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:00.637 [2024-11-26 19:26:17.800826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:00.637 [2024-11-26 19:26:17.800990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:00.637 [2024-11-26 19:26:17.801019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:00.637 [2024-11-26 19:26:17.801026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:00.898 19:26:17 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:00.898 19:26:17 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:36:00.898 19:26:17 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:36:00.898 19:26:17 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.898 19:26:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:00.898 INFO: Log level set to 20 00:36:00.898 INFO: Requests: 00:36:00.898 { 00:36:00.898 "jsonrpc": "2.0", 00:36:00.898 "method": "nvmf_set_config", 00:36:00.898 "id": 1, 00:36:00.898 "params": { 00:36:00.898 "admin_cmd_passthru": { 00:36:00.898 "identify_ctrlr": true 00:36:00.898 } 00:36:00.898 } 00:36:00.898 } 00:36:00.898 00:36:00.898 INFO: response: 00:36:00.898 { 00:36:00.898 "jsonrpc": "2.0", 00:36:00.898 "id": 1, 00:36:00.898 "result": true 00:36:00.898 } 00:36:00.898 00:36:00.898 19:26:17 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.898 19:26:17 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:36:00.898 19:26:17 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.898 19:26:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:00.898 INFO: Setting log level to 20 00:36:00.898 INFO: Setting log level to 20 00:36:00.898 INFO: Log level set to 20 00:36:00.898 INFO: Log level set to 20 00:36:00.898 INFO: Requests: 00:36:00.898 { 00:36:00.898 "jsonrpc": "2.0", 00:36:00.898 "method": "framework_start_init", 00:36:00.898 "id": 1 00:36:00.898 } 00:36:00.898 00:36:00.898 INFO: Requests: 00:36:00.898 { 00:36:00.898 "jsonrpc": "2.0", 00:36:00.898 "method": "framework_start_init", 00:36:00.898 "id": 1 00:36:00.898 } 00:36:00.898 00:36:00.898 [2024-11-26 19:26:17.959148] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:36:00.898 INFO: response: 00:36:00.898 { 00:36:00.898 "jsonrpc": "2.0", 00:36:00.898 "id": 1, 00:36:00.898 "result": true 00:36:00.898 } 00:36:00.898 00:36:00.898 INFO: response: 00:36:00.898 { 00:36:00.898 "jsonrpc": "2.0", 00:36:00.898 "id": 1, 00:36:00.898 "result": true 00:36:00.898 } 00:36:00.898 00:36:00.898 19:26:17 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.898 19:26:17 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:00.898 19:26:17 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.898 19:26:17 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:36:00.898 INFO: Setting log level to 40 00:36:00.898 INFO: Setting log level to 40 00:36:00.898 INFO: Setting log level to 40 00:36:00.898 [2024-11-26 19:26:17.968779] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:00.898 19:26:17 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.898 19:26:17 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:36:00.898 19:26:17 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:00.898 19:26:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:00.898 19:26:18 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:36:00.898 19:26:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.898 19:26:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:01.161 Nvme0n1 00:36:01.161 19:26:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.161 19:26:18 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:36:01.161 19:26:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.161 19:26:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:01.161 19:26:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.161 19:26:18 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:36:01.161 19:26:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.161 19:26:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:01.161 19:26:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.161 19:26:18 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:01.161 19:26:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.161 19:26:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:01.161 [2024-11-26 19:26:18.364986] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:01.161 19:26:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.161 19:26:18 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:36:01.161 19:26:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.161 19:26:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:01.423 [ 00:36:01.423 { 00:36:01.423 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:01.423 "subtype": "Discovery", 00:36:01.423 "listen_addresses": [], 00:36:01.423 "allow_any_host": true, 00:36:01.423 "hosts": [] 00:36:01.423 }, 00:36:01.423 { 00:36:01.423 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:01.423 "subtype": "NVMe", 00:36:01.423 "listen_addresses": [ 00:36:01.423 { 00:36:01.423 "trtype": "TCP", 00:36:01.423 "adrfam": "IPv4", 00:36:01.423 "traddr": "10.0.0.2", 00:36:01.423 "trsvcid": "4420" 00:36:01.423 } 00:36:01.423 ], 00:36:01.423 "allow_any_host": true, 00:36:01.423 "hosts": [], 00:36:01.423 "serial_number": 
"SPDK00000000000001", 00:36:01.423 "model_number": "SPDK bdev Controller", 00:36:01.423 "max_namespaces": 1, 00:36:01.423 "min_cntlid": 1, 00:36:01.423 "max_cntlid": 65519, 00:36:01.423 "namespaces": [ 00:36:01.423 { 00:36:01.423 "nsid": 1, 00:36:01.423 "bdev_name": "Nvme0n1", 00:36:01.423 "name": "Nvme0n1", 00:36:01.423 "nguid": "36344730526054870025384500000044", 00:36:01.423 "uuid": "36344730-5260-5487-0025-384500000044" 00:36:01.423 } 00:36:01.423 ] 00:36:01.423 } 00:36:01.423 ] 00:36:01.423 19:26:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.423 19:26:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:01.423 19:26:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:36:01.423 19:26:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:36:01.684 19:26:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:36:01.684 19:26:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:01.684 19:26:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:36:01.684 19:26:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:36:01.684 19:26:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:36:01.684 19:26:18 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:36:01.684 19:26:18 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:36:01.684 19:26:18 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:01.684 19:26:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.684 19:26:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:01.684 19:26:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.684 19:26:18 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:36:01.684 19:26:18 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:36:01.684 19:26:18 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:01.684 19:26:18 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:36:01.684 19:26:18 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:01.684 19:26:18 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:36:01.684 19:26:18 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:01.684 19:26:18 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:01.684 rmmod nvme_tcp 00:36:01.684 rmmod nvme_fabrics 00:36:02.010 rmmod nvme_keyring 00:36:02.010 19:26:18 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:02.010 19:26:18 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:36:02.010 19:26:18 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:36:02.010 19:26:18 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
3240278 ']' 00:36:02.010 19:26:18 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3240278 00:36:02.010 19:26:18 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 3240278 ']' 00:36:02.010 19:26:18 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 3240278 00:36:02.010 19:26:18 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:36:02.010 19:26:18 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:02.010 19:26:18 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3240278 00:36:02.010 19:26:19 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:02.010 19:26:19 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:02.010 19:26:19 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3240278' 00:36:02.010 killing process with pid 3240278 00:36:02.010 19:26:19 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 3240278 00:36:02.010 19:26:19 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 3240278 00:36:02.313 19:26:19 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:02.313 19:26:19 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:02.313 19:26:19 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:02.313 19:26:19 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:36:02.313 19:26:19 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:36:02.313 19:26:19 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:02.313 19:26:19 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:36:02.313 19:26:19 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:02.313 19:26:19 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:02.313 19:26:19 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:02.314 19:26:19 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:02.314 19:26:19 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:04.227 19:26:21 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:04.227 00:36:04.227 real 0m12.788s 00:36:04.227 user 0m8.110s 00:36:04.227 sys 0m6.864s 00:36:04.227 19:26:21 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:04.227 19:26:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:04.227 ************************************ 00:36:04.227 END TEST nvmf_identify_passthru 00:36:04.227 ************************************ 00:36:04.227 19:26:21 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:04.227 19:26:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:04.227 19:26:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:04.227 19:26:21 -- common/autotest_common.sh@10 -- # set +x 00:36:04.488 ************************************ 00:36:04.488 START TEST nvmf_dif 00:36:04.488 ************************************ 00:36:04.488 19:26:21 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:04.488 * Looking for test storage... 
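The teardown just logged (nvmftestfini) kills the target, strips the SPDK-tagged iptables rules, and dismantles the namespace wiring before the nvmf_dif suite gets underway below. Condensed into a sketch, with the killprocess/iptr/remove_spdk_ns helpers from nvmf/common.sh approximated inline (the netns delete is an assumption about what _remove_spdk_ns does; the flush of cvl_0_1 is taken from the log):

nvmfpid=3240278
NETNS=cvl_0_0_ns_spdk

# killprocess: terminate the target and reap it.
kill "$nvmfpid" && wait "$nvmfpid"

# iptr: reload the ruleset minus every rule tagged with the SPDK_NVMF comment.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# remove_spdk_ns (approximated): drop the target namespace, then flush the
# initiator-side interface address.
ip netns delete "$NETNS" 2>/dev/null
ip -4 addr flush cvl_0_1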
00:36:04.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:04.488 19:26:21 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:04.488 19:26:21 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:36:04.488 19:26:21 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:04.488 19:26:21 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:04.488 19:26:21 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:04.488 19:26:21 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:04.488 19:26:21 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:04.488 19:26:21 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:36:04.488 19:26:21 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:36:04.488 19:26:21 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:36:04.488 19:26:21 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:36:04.488 19:26:21 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:36:04.488 19:26:21 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:36:04.488 19:26:21 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:36:04.488 19:26:21 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:04.488 19:26:21 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:36:04.488 19:26:21 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:36:04.488 19:26:21 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:04.488 19:26:21 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:04.488 19:26:21 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:36:04.488 19:26:21 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:36:04.488 19:26:21 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:04.488 19:26:21 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:36:04.488 19:26:21 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:36:04.488 19:26:21 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:36:04.488 19:26:21 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:36:04.488 19:26:21 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:04.488 19:26:21 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:36:04.488 19:26:21 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:36:04.488 19:26:21 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:04.488 19:26:21 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:04.488 19:26:21 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:36:04.488 19:26:21 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:04.488 19:26:21 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:04.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.488 --rc genhtml_branch_coverage=1 00:36:04.488 --rc genhtml_function_coverage=1 00:36:04.488 --rc genhtml_legend=1 00:36:04.488 --rc geninfo_all_blocks=1 00:36:04.488 --rc geninfo_unexecuted_blocks=1 00:36:04.488 00:36:04.488 ' 00:36:04.488 19:26:21 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:04.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.488 --rc genhtml_branch_coverage=1 00:36:04.488 --rc genhtml_function_coverage=1 00:36:04.488 --rc genhtml_legend=1 00:36:04.488 --rc geninfo_all_blocks=1 00:36:04.488 --rc geninfo_unexecuted_blocks=1 00:36:04.488 00:36:04.488 ' 00:36:04.488 19:26:21 nvmf_dif -- common/autotest_common.sh@1707 -- # 
export 'LCOV=lcov 00:36:04.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.488 --rc genhtml_branch_coverage=1 00:36:04.488 --rc genhtml_function_coverage=1 00:36:04.488 --rc genhtml_legend=1 00:36:04.488 --rc geninfo_all_blocks=1 00:36:04.488 --rc geninfo_unexecuted_blocks=1 00:36:04.488 00:36:04.488 ' 00:36:04.488 19:26:21 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:04.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.488 --rc genhtml_branch_coverage=1 00:36:04.488 --rc genhtml_function_coverage=1 00:36:04.488 --rc genhtml_legend=1 00:36:04.488 --rc geninfo_all_blocks=1 00:36:04.488 --rc geninfo_unexecuted_blocks=1 00:36:04.488 00:36:04.488 ' 00:36:04.488 19:26:21 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:04.488 19:26:21 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:36:04.488 19:26:21 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:04.488 19:26:21 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:04.488 19:26:21 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:04.488 19:26:21 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:04.488 19:26:21 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:04.488 19:26:21 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:04.488 19:26:21 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:04.488 19:26:21 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:04.488 19:26:21 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:04.488 19:26:21 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:04.488 19:26:21 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:04.488 19:26:21 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:04.488 19:26:21 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:04.488 19:26:21 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:04.488 19:26:21 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:04.488 19:26:21 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:04.488 19:26:21 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:04.488 19:26:21 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:36:04.488 19:26:21 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:04.750 19:26:21 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:04.750 19:26:21 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:04.750 19:26:21 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.750 19:26:21 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.750 19:26:21 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.750 19:26:21 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:36:04.750 19:26:21 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.750 19:26:21 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:36:04.750 19:26:21 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:04.750 19:26:21 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:04.750 19:26:21 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:04.750 19:26:21 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:04.750 19:26:21 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:04.750 19:26:21 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:04.750 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:04.750 19:26:21 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:04.750 19:26:21 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:04.750 19:26:21 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:04.750 19:26:21 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:36:04.750 19:26:21 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:36:04.750 19:26:21 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:36:04.750 19:26:21 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:36:04.750 19:26:21 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:36:04.750 19:26:21 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:04.750 19:26:21 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:04.750 19:26:21 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:04.750 19:26:21 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:04.750 19:26:21 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:04.750 19:26:21 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:04.750 19:26:21 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:04.750 19:26:21 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:04.750 19:26:21 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:04.750 19:26:21 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:04.750 19:26:21 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:36:04.750 19:26:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:12.888 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:12.888 
19:26:28 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:12.888 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:12.888 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:12.888 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:12.888 19:26:28 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:12.888 19:26:29 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:12.888 19:26:29 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:12.888 19:26:29 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:12.888 19:26:29 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:12.888 19:26:29 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:12.888 19:26:29 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:12.888 19:26:29 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:12.888 19:26:29 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:12.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:12.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:36:12.888 00:36:12.888 --- 10.0.0.2 ping statistics --- 00:36:12.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:12.888 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:36:12.888 19:26:29 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:12.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:12.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:36:12.888 00:36:12.888 --- 10.0.0.1 ping statistics --- 00:36:12.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:12.888 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:36:12.888 19:26:29 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:12.888 19:26:29 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:36:12.888 19:26:29 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:36:12.888 19:26:29 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:15.434 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:15.434 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:15.434 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:15.434 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:15.434 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:15.434 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:15.434 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:15.434 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:15.434 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:15.434 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:36:15.434 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:15.434 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:15.434 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:15.434 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:15.434 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:15.434 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:15.434 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:15.695 19:26:32 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:15.695 19:26:32 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:15.695 19:26:32 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:15.695 19:26:32 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:15.695 19:26:32 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:15.695 19:26:32 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:15.955 19:26:32 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:36:15.955 19:26:32 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:36:15.955 19:26:32 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:15.955 19:26:32 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:15.955 19:26:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:15.955 19:26:32 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3246135 00:36:15.955 19:26:32 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3246135 00:36:15.956 19:26:32 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:36:15.956 19:26:32 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 3246135 ']' 00:36:15.956 19:26:32 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:15.956 19:26:32 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:15.956 19:26:32 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:36:15.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:15.956 19:26:32 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:15.956 19:26:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:15.956 [2024-11-26 19:26:33.009893] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:36:15.956 [2024-11-26 19:26:33.009958] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:15.956 [2024-11-26 19:26:33.111956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:15.956 [2024-11-26 19:26:33.163098] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:15.956 [2024-11-26 19:26:33.163151] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:15.956 [2024-11-26 19:26:33.163169] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:15.956 [2024-11-26 19:26:33.163177] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:15.956 [2024-11-26 19:26:33.163184] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:15.956 [2024-11-26 19:26:33.163955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:16.897 19:26:33 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:16.897 19:26:33 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:36:16.897 19:26:33 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:16.897 19:26:33 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:16.897 19:26:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:16.897 19:26:33 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:16.897 19:26:33 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:36:16.897 19:26:33 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:36:16.897 19:26:33 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.897 19:26:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:16.897 [2024-11-26 19:26:33.855931] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:16.897 19:26:33 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.897 19:26:33 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:36:16.897 19:26:33 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:16.897 19:26:33 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:16.897 19:26:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:16.897 ************************************ 00:36:16.897 START TEST fio_dif_1_default 00:36:16.897 ************************************ 00:36:16.897 19:26:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:36:16.897 19:26:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:36:16.897 19:26:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:36:16.897 19:26:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:36:16.897 19:26:33 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:36:16.897 19:26:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:36:16.897 19:26:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:16.897 19:26:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.897 19:26:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:16.897 bdev_null0 00:36:16.897 19:26:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.897 19:26:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:16.897 19:26:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.897 19:26:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:16.897 19:26:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.897 19:26:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:16.897 19:26:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.897 19:26:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:16.897 19:26:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.897 19:26:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:16.897 19:26:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.897 19:26:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:16.897 [2024-11-26 19:26:33.940309] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:16.897 19:26:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.897 19:26:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:36:16.897 19:26:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:36:16.897 19:26:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:16.897 19:26:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:36:16.897 19:26:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:16.897 19:26:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:36:16.897 19:26:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:16.897 19:26:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:16.897 19:26:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:16.897 { 00:36:16.897 "params": { 00:36:16.897 "name": "Nvme$subsystem", 00:36:16.897 "trtype": "$TEST_TRANSPORT", 00:36:16.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:16.898 "adrfam": "ipv4", 00:36:16.898 "trsvcid": "$NVMF_PORT", 00:36:16.898 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:16.898 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:16.898 "hdgst": ${hdgst:-false}, 00:36:16.898 
"ddgst": ${ddgst:-false} 00:36:16.898 }, 00:36:16.898 "method": "bdev_nvme_attach_controller" 00:36:16.898 } 00:36:16.898 EOF 00:36:16.898 )") 00:36:16.898 19:26:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:36:16.898 19:26:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:16.898 19:26:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:36:16.898 19:26:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:16.898 19:26:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:16.898 19:26:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:36:16.898 19:26:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:16.898 19:26:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:36:16.898 19:26:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:16.898 19:26:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:16.898 19:26:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:36:16.898 19:26:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:16.898 19:26:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:36:16.898 19:26:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:36:16.898 19:26:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:36:16.898 19:26:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:16.898 19:26:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:36:16.898 19:26:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=,
00:36:16.898 19:26:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:36:16.898 "params": {
00:36:16.898 "name": "Nvme0",
00:36:16.898 "trtype": "tcp",
00:36:16.898 "traddr": "10.0.0.2",
00:36:16.898 "adrfam": "ipv4",
00:36:16.898 "trsvcid": "4420",
00:36:16.898 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:36:16.898 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:36:16.898 "hdgst": false,
00:36:16.898 "ddgst": false
00:36:16.898 },
00:36:16.898 "method": "bdev_nvme_attach_controller"
00:36:16.898 }'
00:36:16.898 19:26:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=
00:36:16.898 19:26:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:36:16.898 19:26:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:36:16.898 19:26:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:36:16.898 19:26:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:36:16.898 19:26:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:36:16.898 19:26:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=
00:36:16.898 19:26:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:36:16.898 19:26:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:36:16.898 19:26:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:17.465 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:36:17.465 fio-3.35
00:36:17.465 Starting 1 thread
00:36:29.733
00:36:29.733 filename0: (groupid=0, jobs=1): err= 0: pid=3246667: Tue Nov 26 19:26:45 2024
00:36:29.733 read: IOPS=97, BW=389KiB/s (399kB/s)(3904KiB/10026msec)
00:36:29.734 slat (nsec): min=5498, max=32832, avg=6529.13, stdev=1752.48
00:36:29.734 clat (usec): min=40897, max=42439, avg=41070.31, stdev=286.21
00:36:29.734 lat (usec): min=40903, max=42472, avg=41076.84, stdev=286.94
00:36:29.734 clat percentiles (usec):
00:36:29.734 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:36:29.734 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:36:29.734 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206],
00:36:29.734 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:36:29.734 | 99.99th=[42206]
00:36:29.734 bw ( KiB/s): min= 352, max= 416, per=99.64%, avg=388.80, stdev=15.66, samples=20
00:36:29.734 iops : min= 88, max= 104, avg=97.20, stdev= 3.91, samples=20
00:36:29.734 lat (msec) : 50=100.00%
00:36:29.734 cpu : usr=93.94%, sys=5.85%, ctx=13, majf=0, minf=230
00:36:29.734 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:29.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:29.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:29.734 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:29.734 latency : target=0, window=0, percentile=100.00%, depth=4
00:36:29.734
00:36:29.734 Run status group 0 (all jobs):
00:36:29.734 READ: bw=389KiB/s (399kB/s), 389KiB/s-389KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10026-10026msec
00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0
00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub
00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@"
00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0
00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0
00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:29.734
00:36:29.734 real 0m11.276s
00:36:29.734 user 0m23.145s
00:36:29.734 sys 0m0.891s
00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:36:29.734 ************************************
00:36:29.734 END TEST fio_dif_1_default
00:36:29.734 ************************************
00:36:29.734 19:26:45 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems
00:36:29.734 19:26:45 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:36:29.734 19:26:45 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable
00:36:29.734 19:26:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:36:29.734 ************************************
00:36:29.734 START TEST fio_dif_1_multi_subsystems ************************************
00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems
00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1
00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1
00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub
00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@"
00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0
00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0
00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:36:29.734 bdev_null0
00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- #
[[ 0 == 0 ]] 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:29.734 [2024-11-26 19:26:45.300624] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:29.734 bdev_null1 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:29.734 { 00:36:29.734 "params": { 00:36:29.734 "name": "Nvme$subsystem", 00:36:29.734 "trtype": "$TEST_TRANSPORT", 00:36:29.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:29.734 "adrfam": "ipv4", 00:36:29.734 "trsvcid": "$NVMF_PORT", 00:36:29.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:29.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:29.734 "hdgst": ${hdgst:-false}, 00:36:29.734 "ddgst": ${ddgst:-false} 00:36:29.734 }, 00:36:29.734 "method": "bdev_nvme_attach_controller" 00:36:29.734 } 00:36:29.734 EOF 00:36:29.734 )") 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:29.734 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:29.735 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:29.735 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( 
file = 1 )) 00:36:29.735 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:36:29.735 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:29.735 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:29.735 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:36:29.735 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:29.735 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:29.735 { 00:36:29.735 "params": { 00:36:29.735 "name": "Nvme$subsystem", 00:36:29.735 "trtype": "$TEST_TRANSPORT", 00:36:29.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:29.735 "adrfam": "ipv4", 00:36:29.735 "trsvcid": "$NVMF_PORT", 00:36:29.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:29.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:29.735 "hdgst": ${hdgst:-false}, 00:36:29.735 "ddgst": ${ddgst:-false} 00:36:29.735 }, 00:36:29.735 "method": "bdev_nvme_attach_controller" 00:36:29.735 } 00:36:29.735 EOF 00:36:29.735 )") 00:36:29.735 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:36:29.735 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:29.735 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:29.735 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:36:29.735 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:36:29.735 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:29.735 "params": { 00:36:29.735 "name": "Nvme0", 00:36:29.735 "trtype": "tcp", 00:36:29.735 "traddr": "10.0.0.2", 00:36:29.735 "adrfam": "ipv4", 00:36:29.735 "trsvcid": "4420", 00:36:29.735 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:29.735 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:29.735 "hdgst": false, 00:36:29.735 "ddgst": false 00:36:29.735 }, 00:36:29.735 "method": "bdev_nvme_attach_controller" 00:36:29.735 },{ 00:36:29.735 "params": { 00:36:29.735 "name": "Nvme1", 00:36:29.735 "trtype": "tcp", 00:36:29.735 "traddr": "10.0.0.2", 00:36:29.735 "adrfam": "ipv4", 00:36:29.735 "trsvcid": "4420", 00:36:29.735 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:29.735 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:29.735 "hdgst": false, 00:36:29.735 "ddgst": false 00:36:29.735 }, 00:36:29.735 "method": "bdev_nvme_attach_controller" 00:36:29.735 }' 00:36:29.735 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:29.735 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:29.735 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:29.735 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:29.735 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:29.735 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:29.735 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:29.735 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:29.735 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:29.735 19:26:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:29.735 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:29.735 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:29.735 fio-3.35 00:36:29.735 Starting 2 threads 00:36:39.736 00:36:39.736 filename0: (groupid=0, jobs=1): err= 0: pid=3249071: Tue Nov 26 19:26:56 2024 00:36:39.736 read: IOPS=97, BW=389KiB/s (399kB/s)(3904KiB/10025msec) 00:36:39.736 slat (nsec): min=5480, max=33357, avg=6480.96, stdev=1775.99 00:36:39.736 clat (usec): min=40856, max=42322, avg=41065.80, stdev=281.72 00:36:39.737 lat (usec): min=40864, max=42356, avg=41072.28, stdev=282.38 00:36:39.737 clat percentiles (usec): 00:36:39.737 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:36:39.737 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:39.737 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:36:39.737 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:39.737 | 99.99th=[42206] 00:36:39.737 bw ( KiB/s): min= 384, max= 416, per=33.76%, avg=388.80, stdev=11.72, samples=20 00:36:39.737 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:36:39.737 lat (msec) : 50=100.00% 00:36:39.737 cpu : usr=95.22%, sys=4.58%, ctx=12, majf=0, minf=178 00:36:39.737 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:39.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.737 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.737 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:39.737 filename1: (groupid=0, jobs=1): err= 0: pid=3249072: Tue Nov 26 19:26:56 2024 00:36:39.737 read: IOPS=190, BW=761KiB/s (780kB/s)(7616KiB/10004msec) 00:36:39.737 slat (nsec): min=5480, max=34771, avg=6312.25, stdev=1524.55 00:36:39.737 clat (usec): min=443, max=42290, avg=20997.72, stdev=20163.55 00:36:39.737 lat (usec): min=448, max=42296, avg=21004.03, stdev=20163.48 00:36:39.737 clat percentiles (usec): 00:36:39.737 | 1.00th=[ 578], 5.00th=[ 791], 10.00th=[ 816], 20.00th=[ 840], 00:36:39.737 | 30.00th=[ 857], 40.00th=[ 873], 50.00th=[ 1123], 60.00th=[41157], 00:36:39.737 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:39.737 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:39.737 | 99.99th=[42206] 00:36:39.737 bw ( KiB/s): min= 704, max= 768, per=66.31%, avg=762.95, stdev=16.05, samples=19 00:36:39.737 iops : min= 176, max= 192, avg=190.74, stdev= 4.01, samples=19 00:36:39.737 lat (usec) : 500=0.42%, 750=1.89%, 1000=45.69% 00:36:39.737 lat (msec) : 2=2.00%, 50=50.00% 00:36:39.737 cpu : usr=95.38%, sys=4.41%, ctx=9, majf=0, minf=104 00:36:39.737 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:39.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:36:39.737 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.737 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:39.737 00:36:39.737 Run status group 0 (all jobs): 00:36:39.737 READ: bw=1149KiB/s (1177kB/s), 389KiB/s-761KiB/s (399kB/s-780kB/s), io=11.2MiB (11.8MB), run=10004-10025msec 00:36:39.737 19:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:36:39.737 19:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:36:39.737 19:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:39.737 19:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:39.737 19:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:36:39.737 19:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:39.737 19:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.737 19:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:39.737 19:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.737 19:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:39.737 19:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.737 19:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:39.737 19:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.737 19:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:39.737 19:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:39.737 19:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:36:39.737 19:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:39.737 19:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.737 19:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:39.737 19:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.737 19:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:39.737 19:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.737 19:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:39.737 19:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.737 00:36:39.737 real 0m11.493s 00:36:39.737 user 0m36.849s 00:36:39.737 sys 0m1.271s 00:36:39.737 19:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:39.737 19:26:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:39.737 ************************************ 00:36:39.737 END TEST fio_dif_1_multi_subsystems 00:36:39.737 ************************************ 00:36:39.737 19:26:56 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:36:39.737 19:26:56 nvmf_dif -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:39.737 19:26:56 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:39.737 19:26:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:39.737 ************************************ 00:36:39.737 START TEST fio_dif_rand_params 00:36:39.737 ************************************ 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:39.737 bdev_null0 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:39.737 [2024-11-26 19:26:56.875403] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:39.737 { 00:36:39.737 "params": { 00:36:39.737 "name": "Nvme$subsystem", 00:36:39.737 "trtype": "$TEST_TRANSPORT", 00:36:39.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:39.737 "adrfam": "ipv4", 00:36:39.737 "trsvcid": "$NVMF_PORT", 00:36:39.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:39.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:39.737 "hdgst": ${hdgst:-false}, 00:36:39.737 "ddgst": ${ddgst:-false} 00:36:39.737 }, 00:36:39.737 "method": "bdev_nvme_attach_controller" 00:36:39.737 } 00:36:39.737 EOF 00:36:39.737 )") 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:39.737 19:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:39.738 19:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:39.738 19:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:39.738 19:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:36:39.738 19:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:39.738 19:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:39.738 19:26:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:39.738 19:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:39.738 19:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:39.738 19:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:39.738 19:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:39.738 19:26:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:39.738 19:26:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
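[editorial sketch] The gen_nvmf_target_json trace above builds one JSON fragment per subsystem with a heredoc appended to a bash array, sanity-checks it with `jq .`, and then (in the records that follow) joins the fragments on `,` via `IFS` before handing the result to fio. A minimal sketch of that idiom, assuming stand-in values — the field layout is copied from the fragments visible in the trace, not from the harness source:

  config=()
  for subsystem in "${@:-0}"; do
    config+=("$(cat <<-EOF
	{
	  "params": {
	    "name": "Nvme$subsystem", "trtype": "tcp", "traddr": "10.0.0.2",
	    "adrfam": "ipv4", "trsvcid": "4420",
	    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
	    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
	    "hdgst": false, "ddgst": false
	  },
	  "method": "bdev_nvme_attach_controller"
	}
	EOF
    )")
  done
  printf '%s\n' "${config[@]}" | jq .    # each fragment must parse as standalone JSON
  IFS=,
  printf '%s\n' "${config[*]}"           # comma-joined fragments, as printed in the next records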
00:36:39.738 19:26:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:39.738 19:26:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:39.738 "params": { 00:36:39.738 "name": "Nvme0", 00:36:39.738 "trtype": "tcp", 00:36:39.738 "traddr": "10.0.0.2", 00:36:39.738 "adrfam": "ipv4", 00:36:39.738 "trsvcid": "4420", 00:36:39.738 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:39.738 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:39.738 "hdgst": false, 00:36:39.738 "ddgst": false 00:36:39.738 }, 00:36:39.738 "method": "bdev_nvme_attach_controller" 00:36:39.738 }' 00:36:39.738 19:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:39.738 19:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:39.738 19:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:39.738 19:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:39.738 19:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:39.738 19:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:40.022 19:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:40.022 19:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:40.022 19:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:40.022 19:26:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:40.292 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:40.292 ... 
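[editorial sketch] Before the fio output below: what the NULL_DIF=3 branch of fio_dif_rand_params has configured at this point, written out as the equivalent manual commands. This assumes a running SPDK nvmf target application, that the harness's `rpc_cmd` wraps the stock scripts/rpc.py, and stand-in file names (bdev.json, dif.fio) for the /dev/fd/62 and /dev/fd/61 descriptors the script generates on the fly; every flag is lifted verbatim from the trace:

  # 64 MiB null bdev, 512 B blocks + 16 B metadata, protected with DIF type 3
  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  # Export it over NVMe/TCP on 10.0.0.2:4420
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # Drive it with fio through the SPDK bdev ioengine plugin built in this workspace
  LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio

The randread/128KiB/iodepth=3 job parameters in the filename0 line above come from the NULL_DIF=3 settings (bs=128k, numjobs=3, iodepth=3, runtime=5) traced at the start of this test.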
00:36:40.292 fio-3.35 00:36:40.292 Starting 3 threads 00:36:46.871 00:36:46.871 filename0: (groupid=0, jobs=1): err= 0: pid=3251382: Tue Nov 26 19:27:02 2024 00:36:46.871 read: IOPS=304, BW=38.0MiB/s (39.9MB/s)(192MiB/5046msec) 00:36:46.871 slat (nsec): min=5551, max=36861, avg=7492.63, stdev=1780.13 00:36:46.871 clat (usec): min=4571, max=86852, avg=9816.59, stdev=5541.81 00:36:46.871 lat (usec): min=4580, max=86861, avg=9824.08, stdev=5542.02 00:36:46.871 clat percentiles (usec): 00:36:46.871 | 1.00th=[ 5342], 5.00th=[ 6063], 10.00th=[ 6652], 20.00th=[ 7570], 00:36:46.871 | 30.00th=[ 8225], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[ 9896], 00:36:46.871 | 70.00th=[10290], 80.00th=[10814], 90.00th=[11469], 95.00th=[11994], 00:36:46.871 | 99.00th=[47973], 99.50th=[49021], 99.90th=[52691], 99.95th=[86508], 00:36:46.871 | 99.99th=[86508] 00:36:46.871 bw ( KiB/s): min=25344, max=46592, per=32.67%, avg=39270.40, stdev=6045.07, samples=10 00:36:46.871 iops : min= 198, max= 364, avg=306.80, stdev=47.23, samples=10 00:36:46.871 lat (msec) : 10=62.43%, 20=35.94%, 50=1.37%, 100=0.26% 00:36:46.871 cpu : usr=93.88%, sys=5.85%, ctx=9, majf=0, minf=90 00:36:46.872 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:46.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.872 issued rwts: total=1536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.872 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:46.872 filename0: (groupid=0, jobs=1): err= 0: pid=3251383: Tue Nov 26 19:27:02 2024 00:36:46.872 read: IOPS=324, BW=40.5MiB/s (42.5MB/s)(203MiB/5004msec) 00:36:46.872 slat (usec): min=5, max=268, avg= 7.42, stdev= 6.68 00:36:46.872 clat (usec): min=4146, max=88516, avg=9239.01, stdev=8126.56 00:36:46.872 lat (usec): min=4154, max=88525, avg=9246.43, stdev=8126.77 00:36:46.872 clat percentiles (usec): 00:36:46.872 | 1.00th=[ 4359], 5.00th=[ 5735], 10.00th=[ 6259], 20.00th=[ 6718], 00:36:46.872 | 30.00th=[ 7046], 40.00th=[ 7308], 50.00th=[ 7635], 60.00th=[ 7898], 00:36:46.872 | 70.00th=[ 8225], 80.00th=[ 8717], 90.00th=[ 9503], 95.00th=[10552], 00:36:46.872 | 99.00th=[48497], 99.50th=[49021], 99.90th=[50070], 99.95th=[88605], 00:36:46.872 | 99.99th=[88605] 00:36:46.872 bw ( KiB/s): min=27136, max=50432, per=34.52%, avg=41497.60, stdev=7132.82, samples=10 00:36:46.872 iops : min= 212, max= 394, avg=324.20, stdev=55.73, samples=10 00:36:46.872 lat (msec) : 10=93.28%, 20=2.71%, 50=3.82%, 100=0.18% 00:36:46.872 cpu : usr=94.78%, sys=4.94%, ctx=11, majf=0, minf=150 00:36:46.872 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:46.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.872 issued rwts: total=1623,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.872 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:46.872 filename0: (groupid=0, jobs=1): err= 0: pid=3251384: Tue Nov 26 19:27:02 2024 00:36:46.872 read: IOPS=313, BW=39.2MiB/s (41.1MB/s)(198MiB/5044msec) 00:36:46.872 slat (nsec): min=5531, max=34987, avg=7690.84, stdev=1793.54 00:36:46.872 clat (usec): min=4675, max=50138, avg=9540.33, stdev=4413.23 00:36:46.872 lat (usec): min=4683, max=50145, avg=9548.02, stdev=4413.11 00:36:46.872 clat percentiles (usec): 00:36:46.872 | 1.00th=[ 5276], 5.00th=[ 5800], 10.00th=[ 6390], 20.00th=[ 7439], 
00:36:46.872 | 30.00th=[ 8160], 40.00th=[ 8717], 50.00th=[ 9372], 60.00th=[ 9896], 00:36:46.872 | 70.00th=[10421], 80.00th=[10814], 90.00th=[11469], 95.00th=[11994], 00:36:46.872 | 99.00th=[46400], 99.50th=[47973], 99.90th=[50070], 99.95th=[50070], 00:36:46.872 | 99.99th=[50070] 00:36:46.872 bw ( KiB/s): min=37376, max=43008, per=33.60%, avg=40388.20, stdev=2233.14, samples=10 00:36:46.872 iops : min= 292, max= 336, avg=315.50, stdev=17.41, samples=10 00:36:46.872 lat (msec) : 10=62.59%, 20=36.33%, 50=0.95%, 100=0.13% 00:36:46.872 cpu : usr=93.81%, sys=5.93%, ctx=9, majf=0, minf=110 00:36:46.872 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:46.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.872 issued rwts: total=1580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.872 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:46.872 00:36:46.872 Run status group 0 (all jobs): 00:36:46.872 READ: bw=117MiB/s (123MB/s), 38.0MiB/s-40.5MiB/s (39.9MB/s-42.5MB/s), io=592MiB (621MB), run=5004-5046msec 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:46.872 bdev_null0 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:46.872 [2024-11-26 19:27:03.188037] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:46.872 bdev_null1 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:46.872 bdev_null2 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:36:46.872 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:46.873 19:27:03 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:46.873 { 00:36:46.873 "params": { 00:36:46.873 "name": "Nvme$subsystem", 00:36:46.873 "trtype": "$TEST_TRANSPORT", 00:36:46.873 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:46.873 "adrfam": "ipv4", 00:36:46.873 "trsvcid": "$NVMF_PORT", 00:36:46.873 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:46.873 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:46.873 "hdgst": ${hdgst:-false}, 00:36:46.873 "ddgst": ${ddgst:-false} 00:36:46.873 }, 00:36:46.873 "method": "bdev_nvme_attach_controller" 00:36:46.873 } 00:36:46.873 EOF 00:36:46.873 )") 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:46.873 { 00:36:46.873 "params": { 00:36:46.873 "name": "Nvme$subsystem", 00:36:46.873 "trtype": "$TEST_TRANSPORT", 00:36:46.873 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:46.873 "adrfam": "ipv4", 00:36:46.873 "trsvcid": "$NVMF_PORT", 00:36:46.873 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:46.873 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:46.873 "hdgst": ${hdgst:-false}, 00:36:46.873 "ddgst": ${ddgst:-false} 00:36:46.873 }, 00:36:46.873 "method": "bdev_nvme_attach_controller" 00:36:46.873 } 00:36:46.873 EOF 00:36:46.873 )") 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:46.873 19:27:03 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:46.873 { 00:36:46.873 "params": { 00:36:46.873 "name": "Nvme$subsystem", 00:36:46.873 "trtype": "$TEST_TRANSPORT", 00:36:46.873 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:46.873 "adrfam": "ipv4", 00:36:46.873 "trsvcid": "$NVMF_PORT", 00:36:46.873 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:46.873 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:46.873 "hdgst": ${hdgst:-false}, 00:36:46.873 "ddgst": ${ddgst:-false} 00:36:46.873 }, 00:36:46.873 "method": "bdev_nvme_attach_controller" 00:36:46.873 } 00:36:46.873 EOF 00:36:46.873 )") 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:46.873 "params": { 00:36:46.873 "name": "Nvme0", 00:36:46.873 "trtype": "tcp", 00:36:46.873 "traddr": "10.0.0.2", 00:36:46.873 "adrfam": "ipv4", 00:36:46.873 "trsvcid": "4420", 00:36:46.873 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:46.873 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:46.873 "hdgst": false, 00:36:46.873 "ddgst": false 00:36:46.873 }, 00:36:46.873 "method": "bdev_nvme_attach_controller" 00:36:46.873 },{ 00:36:46.873 "params": { 00:36:46.873 "name": "Nvme1", 00:36:46.873 "trtype": "tcp", 00:36:46.873 "traddr": "10.0.0.2", 00:36:46.873 "adrfam": "ipv4", 00:36:46.873 "trsvcid": "4420", 00:36:46.873 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:46.873 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:46.873 "hdgst": false, 00:36:46.873 "ddgst": false 00:36:46.873 }, 00:36:46.873 "method": "bdev_nvme_attach_controller" 00:36:46.873 },{ 00:36:46.873 "params": { 00:36:46.873 "name": "Nvme2", 00:36:46.873 "trtype": "tcp", 00:36:46.873 "traddr": "10.0.0.2", 00:36:46.873 "adrfam": "ipv4", 00:36:46.873 "trsvcid": "4420", 00:36:46.873 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:46.873 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:36:46.873 "hdgst": false, 00:36:46.873 "ddgst": false 00:36:46.873 }, 00:36:46.873 "method": "bdev_nvme_attach_controller" 00:36:46.873 }' 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:46.873 
19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:46.873 19:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:46.873 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:46.873 ... 00:36:46.873 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:46.873 ... 00:36:46.873 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:46.873 ... 00:36:46.873 fio-3.35 00:36:46.873 Starting 24 threads 00:36:59.097 00:36:59.097 filename0: (groupid=0, jobs=1): err= 0: pid=3252883: Tue Nov 26 19:27:14 2024 00:36:59.097 read: IOPS=690, BW=2762KiB/s (2828kB/s)(27.0MiB/10020msec) 00:36:59.097 slat (nsec): min=5669, max=66577, avg=9769.68, stdev=5977.51 00:36:59.097 clat (usec): min=1246, max=29413, avg=23089.66, stdev=4025.73 00:36:59.097 lat (usec): min=1258, max=29422, avg=23099.43, stdev=4024.50 00:36:59.097 clat percentiles (usec): 00:36:59.097 | 1.00th=[ 1532], 5.00th=[15795], 10.00th=[23462], 20.00th=[23725], 00:36:59.097 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:36:59.097 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:59.097 | 99.00th=[25297], 99.50th=[25297], 99.90th=[29492], 99.95th=[29492], 00:36:59.097 | 99.99th=[29492] 00:36:59.098 bw ( KiB/s): min= 2560, max= 4280, per=4.32%, avg=2761.20, stdev=363.41, samples=20 00:36:59.098 iops : min= 640, max= 1070, avg=690.30, stdev=90.85, samples=20 00:36:59.098 lat (msec) : 2=1.88%, 4=0.53%, 10=0.81%, 20=3.35%, 50=93.42% 00:36:59.098 cpu : usr=98.69%, sys=0.91%, ctx=94, majf=0, minf=44 00:36:59.098 IO depths : 1=6.1%, 2=12.2%, 4=24.5%, 8=50.8%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:59.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.098 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.098 issued rwts: total=6919,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:59.098 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:59.098 filename0: (groupid=0, jobs=1): err= 0: pid=3252885: Tue Nov 26 19:27:14 2024 00:36:59.098 read: IOPS=669, BW=2677KiB/s (2741kB/s)(26.2MiB/10018msec) 00:36:59.098 slat (nsec): min=5670, max=72065, avg=11419.05, stdev=7330.47 00:36:59.098 clat (usec): min=7330, max=27603, avg=23809.92, stdev=1715.28 00:36:59.098 lat (usec): min=7349, max=27614, avg=23821.34, stdev=1713.87 00:36:59.098 clat percentiles (usec): 00:36:59.098 | 1.00th=[12387], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:36:59.098 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:36:59.098 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:59.098 | 99.00th=[25297], 99.50th=[25560], 99.90th=[27657], 99.95th=[27657], 00:36:59.098 | 99.99th=[27657] 00:36:59.098 bw ( KiB/s): min= 2560, max= 2944, per=4.19%, avg=2675.20, stdev=91.93, samples=20 00:36:59.098 iops : min= 640, max= 736, avg=668.80, stdev=22.98, samples=20 00:36:59.098 lat (msec) : 10=0.49%, 20=1.66%, 50=97.85% 00:36:59.098 cpu : usr=98.51%, sys=1.01%, ctx=92, majf=0, minf=27 00:36:59.098 IO depths : 1=6.2%, 
2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:59.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.098 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.098 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:59.098 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:59.098 filename0: (groupid=0, jobs=1): err= 0: pid=3252886: Tue Nov 26 19:27:14 2024 00:36:59.098 read: IOPS=667, BW=2670KiB/s (2734kB/s)(26.1MiB/10018msec) 00:36:59.098 slat (nsec): min=5666, max=58967, avg=8519.12, stdev=5034.02 00:36:59.098 clat (usec): min=8746, max=27601, avg=23889.33, stdev=1585.17 00:36:59.098 lat (usec): min=8762, max=27610, avg=23897.85, stdev=1583.39 00:36:59.098 clat percentiles (usec): 00:36:59.098 | 1.00th=[12125], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:36:59.098 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:36:59.098 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:59.098 | 99.00th=[25297], 99.50th=[25560], 99.90th=[27657], 99.95th=[27657], 00:36:59.098 | 99.99th=[27657] 00:36:59.098 bw ( KiB/s): min= 2560, max= 2944, per=4.18%, avg=2668.80, stdev=85.87, samples=20 00:36:59.098 iops : min= 640, max= 736, avg=667.20, stdev=21.47, samples=20 00:36:59.098 lat (msec) : 10=0.48%, 20=0.96%, 50=98.56% 00:36:59.098 cpu : usr=98.91%, sys=0.80%, ctx=13, majf=0, minf=36 00:36:59.098 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:59.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.098 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.098 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:59.098 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:59.098 filename0: (groupid=0, jobs=1): err= 0: pid=3252887: Tue Nov 26 19:27:14 2024 00:36:59.098 read: IOPS=668, BW=2674KiB/s (2738kB/s)(26.1MiB/10004msec) 00:36:59.098 slat (nsec): min=5685, max=77004, avg=11879.08, stdev=7909.75 00:36:59.098 clat (usec): min=4749, max=29402, avg=23832.97, stdev=1815.80 00:36:59.098 lat (usec): min=4759, max=29420, avg=23844.85, stdev=1815.05 00:36:59.098 clat percentiles (usec): 00:36:59.098 | 1.00th=[10028], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:36:59.098 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:36:59.098 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24773], 95.00th=[24773], 00:36:59.098 | 99.00th=[25297], 99.50th=[25560], 99.90th=[29230], 99.95th=[29492], 00:36:59.098 | 99.99th=[29492] 00:36:59.098 bw ( KiB/s): min= 2560, max= 3072, per=4.19%, avg=2674.53, stdev=112.03, samples=19 00:36:59.098 iops : min= 640, max= 768, avg=668.63, stdev=28.01, samples=19 00:36:59.098 lat (msec) : 10=0.97%, 20=0.46%, 50=98.56% 00:36:59.098 cpu : usr=98.98%, sys=0.73%, ctx=13, majf=0, minf=15 00:36:59.098 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:59.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.098 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.098 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:59.098 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:59.098 filename0: (groupid=0, jobs=1): err= 0: pid=3252888: Tue Nov 26 19:27:14 2024 00:36:59.098 read: IOPS=660, BW=2643KiB/s (2706kB/s)(25.9MiB/10049msec) 00:36:59.098 slat (nsec): min=5679, max=86012, 
avg=17389.13, stdev=13594.02 00:36:59.098 clat (usec): min=14895, max=56395, avg=24010.05, stdev=1318.97 00:36:59.098 lat (usec): min=14902, max=56401, avg=24027.44, stdev=1318.17 00:36:59.098 clat percentiles (usec): 00:36:59.098 | 1.00th=[22152], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:36:59.098 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:36:59.098 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24773], 95.00th=[24773], 00:36:59.098 | 99.00th=[25560], 99.50th=[27395], 99.90th=[33162], 99.95th=[56361], 00:36:59.098 | 99.99th=[56361] 00:36:59.098 bw ( KiB/s): min= 2495, max= 2688, per=4.14%, avg=2646.35, stdev=66.68, samples=20 00:36:59.098 iops : min= 623, max= 672, avg=661.55, stdev=16.76, samples=20 00:36:59.098 lat (msec) : 20=0.62%, 50=99.29%, 100=0.09% 00:36:59.098 cpu : usr=98.52%, sys=1.00%, ctx=144, majf=0, minf=35 00:36:59.098 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:59.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.098 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.098 issued rwts: total=6640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:59.098 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:59.098 filename0: (groupid=0, jobs=1): err= 0: pid=3252889: Tue Nov 26 19:27:14 2024 00:36:59.098 read: IOPS=663, BW=2655KiB/s (2719kB/s)(26.0MiB/10012msec) 00:36:59.098 slat (nsec): min=5668, max=82589, avg=16111.07, stdev=11309.39 00:36:59.098 clat (usec): min=9775, max=38209, avg=23964.60, stdev=1626.92 00:36:59.098 lat (usec): min=9811, max=38217, avg=23980.71, stdev=1626.86 00:36:59.098 clat percentiles (usec): 00:36:59.098 | 1.00th=[15139], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:36:59.098 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:36:59.098 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24773], 95.00th=[25035], 00:36:59.098 | 99.00th=[27657], 99.50th=[33162], 99.90th=[33817], 99.95th=[38011], 00:36:59.098 | 99.99th=[38011] 00:36:59.098 bw ( KiB/s): min= 2560, max= 2688, per=4.16%, avg=2656.84, stdev=48.73, samples=19 00:36:59.098 iops : min= 640, max= 672, avg=664.21, stdev=12.18, samples=19 00:36:59.098 lat (msec) : 10=0.11%, 20=1.41%, 50=98.48% 00:36:59.098 cpu : usr=98.86%, sys=0.81%, ctx=82, majf=0, minf=23 00:36:59.098 IO depths : 1=5.1%, 2=10.3%, 4=21.8%, 8=54.8%, 16=8.0%, 32=0.0%, >=64=0.0% 00:36:59.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.098 complete : 0=0.0%, 4=93.5%, 8=1.3%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.098 issued rwts: total=6646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:59.098 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:59.098 filename0: (groupid=0, jobs=1): err= 0: pid=3252890: Tue Nov 26 19:27:14 2024 00:36:59.098 read: IOPS=663, BW=2655KiB/s (2719kB/s)(25.9MiB/10003msec) 00:36:59.098 slat (nsec): min=5668, max=90393, avg=22514.72, stdev=15220.02 00:36:59.098 clat (usec): min=9691, max=40938, avg=23906.11, stdev=1382.94 00:36:59.098 lat (usec): min=9698, max=40957, avg=23928.62, stdev=1382.58 00:36:59.098 clat percentiles (usec): 00:36:59.098 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:36:59.098 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:59.098 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:59.098 | 99.00th=[25822], 99.50th=[27395], 99.90th=[40633], 99.95th=[41157], 00:36:59.098 | 99.99th=[41157] 
00:36:59.098 bw ( KiB/s): min= 2560, max= 2688, per=4.14%, avg=2647.58, stdev=61.13, samples=19 00:36:59.098 iops : min= 640, max= 672, avg=661.89, stdev=15.28, samples=19 00:36:59.098 lat (msec) : 10=0.24%, 20=0.57%, 50=99.19% 00:36:59.098 cpu : usr=98.95%, sys=0.69%, ctx=74, majf=0, minf=26 00:36:59.098 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:59.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.098 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.098 issued rwts: total=6640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:59.098 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:59.098 filename0: (groupid=0, jobs=1): err= 0: pid=3252891: Tue Nov 26 19:27:14 2024 00:36:59.098 read: IOPS=671, BW=2685KiB/s (2749kB/s)(26.2MiB/10005msec) 00:36:59.098 slat (nsec): min=5495, max=93206, avg=15676.54, stdev=12786.09 00:36:59.098 clat (usec): min=5506, max=53178, avg=23743.29, stdev=3888.88 00:36:59.098 lat (usec): min=5512, max=53198, avg=23758.97, stdev=3890.21 00:36:59.098 clat percentiles (usec): 00:36:59.098 | 1.00th=[14484], 5.00th=[17171], 10.00th=[19268], 20.00th=[21365], 00:36:59.098 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:59.098 | 70.00th=[24249], 80.00th=[24773], 90.00th=[27919], 95.00th=[29754], 00:36:59.098 | 99.00th=[37487], 99.50th=[38536], 99.90th=[41157], 99.95th=[53216], 00:36:59.098 | 99.99th=[53216] 00:36:59.098 bw ( KiB/s): min= 2475, max= 2848, per=4.19%, avg=2678.47, stdev=93.52, samples=19 00:36:59.098 iops : min= 618, max= 712, avg=669.58, stdev=23.47, samples=19 00:36:59.098 lat (msec) : 10=0.24%, 20=14.07%, 50=85.62%, 100=0.07% 00:36:59.098 cpu : usr=98.46%, sys=1.15%, ctx=27, majf=0, minf=34 00:36:59.098 IO depths : 1=1.1%, 2=2.3%, 4=7.5%, 8=75.1%, 16=14.0%, 32=0.0%, >=64=0.0% 00:36:59.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.098 complete : 0=0.0%, 4=89.6%, 8=7.3%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.098 issued rwts: total=6716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:59.098 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:59.099 filename1: (groupid=0, jobs=1): err= 0: pid=3252892: Tue Nov 26 19:27:14 2024 00:36:59.099 read: IOPS=659, BW=2639KiB/s (2703kB/s)(25.8MiB/10003msec) 00:36:59.099 slat (nsec): min=5585, max=83756, avg=17307.74, stdev=14015.59 00:36:59.099 clat (usec): min=8343, max=67063, avg=24149.26, stdev=3357.21 00:36:59.099 lat (usec): min=8349, max=67088, avg=24166.57, stdev=3357.17 00:36:59.099 clat percentiles (usec): 00:36:59.099 | 1.00th=[16712], 5.00th=[19530], 10.00th=[21890], 20.00th=[23725], 00:36:59.099 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:36:59.099 | 70.00th=[24249], 80.00th=[24511], 90.00th=[26084], 95.00th=[28967], 00:36:59.099 | 99.00th=[32900], 99.50th=[36439], 99.90th=[66847], 99.95th=[66847], 00:36:59.099 | 99.99th=[66847] 00:36:59.099 bw ( KiB/s): min= 2432, max= 2720, per=4.12%, avg=2631.58, stdev=70.63, samples=19 00:36:59.099 iops : min= 608, max= 680, avg=657.89, stdev=17.66, samples=19 00:36:59.099 lat (msec) : 10=0.26%, 20=6.26%, 50=93.24%, 100=0.24% 00:36:59.099 cpu : usr=98.66%, sys=0.88%, ctx=65, majf=0, minf=36 00:36:59.099 IO depths : 1=1.2%, 2=2.4%, 4=6.2%, 8=75.1%, 16=15.2%, 32=0.0%, >=64=0.0% 00:36:59.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.099 complete : 0=0.0%, 4=90.1%, 8=7.9%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:36:59.099 issued rwts: total=6600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:59.099 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:59.099 filename1: (groupid=0, jobs=1): err= 0: pid=3252894: Tue Nov 26 19:27:14 2024 00:36:59.099 read: IOPS=666, BW=2667KiB/s (2731kB/s)(26.1MiB/10017msec) 00:36:59.099 slat (nsec): min=5654, max=85699, avg=21307.44, stdev=14159.74 00:36:59.099 clat (usec): min=8925, max=31550, avg=23792.93, stdev=1578.71 00:36:59.099 lat (usec): min=8948, max=31558, avg=23814.24, stdev=1579.43 00:36:59.099 clat percentiles (usec): 00:36:59.099 | 1.00th=[12125], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:36:59.099 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:59.099 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:59.099 | 99.00th=[25297], 99.50th=[27395], 99.90th=[31589], 99.95th=[31589], 00:36:59.099 | 99.99th=[31589] 00:36:59.099 bw ( KiB/s): min= 2560, max= 2869, per=4.17%, avg=2665.05, stdev=74.04, samples=20 00:36:59.099 iops : min= 640, max= 717, avg=666.25, stdev=18.47, samples=20 00:36:59.099 lat (msec) : 10=0.21%, 20=1.17%, 50=98.62% 00:36:59.099 cpu : usr=98.86%, sys=0.83%, ctx=38, majf=0, minf=39 00:36:59.099 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:59.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.099 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.099 issued rwts: total=6678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:59.099 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:59.099 filename1: (groupid=0, jobs=1): err= 0: pid=3252895: Tue Nov 26 19:27:14 2024 00:36:59.099 read: IOPS=663, BW=2654KiB/s (2718kB/s)(25.9MiB/10006msec) 00:36:59.099 slat (nsec): min=5674, max=92124, avg=19352.90, stdev=15676.10 00:36:59.099 clat (usec): min=15111, max=33398, avg=23947.51, stdev=850.66 00:36:59.099 lat (usec): min=15116, max=33405, avg=23966.86, stdev=848.59 00:36:59.099 clat percentiles (usec): 00:36:59.099 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:36:59.099 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:59.099 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:59.099 | 99.00th=[25560], 99.50th=[27657], 99.90th=[30540], 99.95th=[30540], 00:36:59.099 | 99.99th=[33424] 00:36:59.099 bw ( KiB/s): min= 2560, max= 2688, per=4.16%, avg=2654.32, stdev=57.91, samples=19 00:36:59.099 iops : min= 640, max= 672, avg=663.58, stdev=14.48, samples=19 00:36:59.099 lat (msec) : 20=0.57%, 50=99.43% 00:36:59.099 cpu : usr=99.04%, sys=0.65%, ctx=34, majf=0, minf=28 00:36:59.099 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:59.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.099 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.099 issued rwts: total=6640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:59.099 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:59.099 filename1: (groupid=0, jobs=1): err= 0: pid=3252896: Tue Nov 26 19:27:14 2024 00:36:59.099 read: IOPS=668, BW=2676KiB/s (2740kB/s)(26.1MiB/10003msec) 00:36:59.099 slat (usec): min=5, max=101, avg=22.11, stdev=16.75 00:36:59.099 clat (usec): min=8329, max=39785, avg=23717.28, stdev=2778.13 00:36:59.099 lat (usec): min=8335, max=39802, avg=23739.39, stdev=2779.86 00:36:59.099 clat percentiles (usec): 00:36:59.099 | 1.00th=[15270], 
5.00th=[18744], 10.00th=[22152], 20.00th=[23462], 00:36:59.099 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:59.099 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24773], 95.00th=[26870], 00:36:59.099 | 99.00th=[33817], 99.50th=[36963], 99.90th=[39584], 99.95th=[39584], 00:36:59.099 | 99.99th=[39584] 00:36:59.099 bw ( KiB/s): min= 2560, max= 2880, per=4.16%, avg=2659.63, stdev=78.69, samples=19 00:36:59.099 iops : min= 640, max= 720, avg=664.89, stdev=19.69, samples=19 00:36:59.099 lat (msec) : 10=0.15%, 20=7.13%, 50=92.72% 00:36:59.099 cpu : usr=98.72%, sys=0.91%, ctx=32, majf=0, minf=26 00:36:59.099 IO depths : 1=4.0%, 2=8.4%, 4=19.0%, 8=59.3%, 16=9.3%, 32=0.0%, >=64=0.0% 00:36:59.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.099 complete : 0=0.0%, 4=92.7%, 8=2.4%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.099 issued rwts: total=6692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:59.099 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:59.099 filename1: (groupid=0, jobs=1): err= 0: pid=3252897: Tue Nov 26 19:27:14 2024 00:36:59.099 read: IOPS=666, BW=2668KiB/s (2732kB/s)(26.1MiB/10004msec) 00:36:59.099 slat (nsec): min=5672, max=94987, avg=19253.00, stdev=14925.66 00:36:59.099 clat (usec): min=8856, max=28113, avg=23815.67, stdev=1456.61 00:36:59.099 lat (usec): min=8872, max=28121, avg=23834.92, stdev=1456.64 00:36:59.099 clat percentiles (usec): 00:36:59.099 | 1.00th=[15795], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:36:59.099 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:59.099 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:59.099 | 99.00th=[25560], 99.50th=[27395], 99.90th=[27657], 99.95th=[28181], 00:36:59.099 | 99.99th=[28181] 00:36:59.099 bw ( KiB/s): min= 2560, max= 2949, per=4.18%, avg=2668.05, stdev=88.97, samples=19 00:36:59.099 iops : min= 640, max= 737, avg=667.00, stdev=22.20, samples=19 00:36:59.099 lat (msec) : 10=0.34%, 20=1.23%, 50=98.43% 00:36:59.099 cpu : usr=98.76%, sys=0.77%, ctx=123, majf=0, minf=28 00:36:59.099 IO depths : 1=6.0%, 2=12.2%, 4=24.9%, 8=50.4%, 16=6.5%, 32=0.0%, >=64=0.0% 00:36:59.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.099 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.099 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:59.099 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:59.099 filename1: (groupid=0, jobs=1): err= 0: pid=3252898: Tue Nov 26 19:27:14 2024 00:36:59.099 read: IOPS=663, BW=2653KiB/s (2716kB/s)(25.9MiB/10004msec) 00:36:59.099 slat (nsec): min=5537, max=80653, avg=21779.36, stdev=13545.55 00:36:59.099 clat (usec): min=8591, max=44829, avg=23931.19, stdev=1283.58 00:36:59.099 lat (usec): min=8597, max=44849, avg=23952.97, stdev=1283.44 00:36:59.099 clat percentiles (usec): 00:36:59.099 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:36:59.099 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:59.099 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:59.099 | 99.00th=[25822], 99.50th=[27395], 99.90th=[40109], 99.95th=[40633], 00:36:59.099 | 99.99th=[44827] 00:36:59.099 bw ( KiB/s): min= 2560, max= 2688, per=4.14%, avg=2647.58, stdev=61.13, samples=19 00:36:59.099 iops : min= 640, max= 672, avg=661.89, stdev=15.28, samples=19 00:36:59.099 lat (msec) : 10=0.15%, 20=0.45%, 50=99.40% 00:36:59.099 cpu : 
usr=98.58%, sys=0.95%, ctx=71, majf=0, minf=26 00:36:59.099 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:59.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.099 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.099 issued rwts: total=6634,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:59.099 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:59.099 filename1: (groupid=0, jobs=1): err= 0: pid=3252899: Tue Nov 26 19:27:14 2024 00:36:59.099 read: IOPS=676, BW=2707KiB/s (2772kB/s)(26.5MiB/10015msec) 00:36:59.099 slat (usec): min=5, max=100, avg=13.44, stdev=11.03 00:36:59.099 clat (usec): min=11297, max=42428, avg=23569.02, stdev=3545.47 00:36:59.099 lat (usec): min=11306, max=42443, avg=23582.46, stdev=3546.31 00:36:59.099 clat percentiles (usec): 00:36:59.099 | 1.00th=[14615], 5.00th=[17695], 10.00th=[19006], 20.00th=[20841], 00:36:59.099 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:59.099 | 70.00th=[24249], 80.00th=[24773], 90.00th=[27657], 95.00th=[29230], 00:36:59.099 | 99.00th=[34341], 99.50th=[38011], 99.90th=[39584], 99.95th=[39584], 00:36:59.099 | 99.99th=[42206] 00:36:59.099 bw ( KiB/s): min= 2576, max= 2800, per=4.24%, avg=2707.20, stdev=59.10, samples=20 00:36:59.099 iops : min= 644, max= 700, avg=676.80, stdev=14.77, samples=20 00:36:59.099 lat (msec) : 20=16.67%, 50=83.33% 00:36:59.099 cpu : usr=98.72%, sys=0.98%, ctx=12, majf=0, minf=43 00:36:59.099 IO depths : 1=0.6%, 2=1.2%, 4=4.9%, 8=78.4%, 16=14.9%, 32=0.0%, >=64=0.0% 00:36:59.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.099 complete : 0=0.0%, 4=89.4%, 8=7.9%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.099 issued rwts: total=6778,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:59.099 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:59.099 filename1: (groupid=0, jobs=1): err= 0: pid=3252900: Tue Nov 26 19:27:14 2024 00:36:59.099 read: IOPS=663, BW=2655KiB/s (2719kB/s)(25.9MiB/10003msec) 00:36:59.099 slat (nsec): min=5666, max=87709, avg=21257.27, stdev=13020.41 00:36:59.099 clat (usec): min=9551, max=44021, avg=23916.14, stdev=1355.85 00:36:59.099 lat (usec): min=9557, max=44039, avg=23937.40, stdev=1355.71 00:36:59.099 clat percentiles (usec): 00:36:59.099 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:36:59.099 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:59.099 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:59.099 | 99.00th=[25822], 99.50th=[27395], 99.90th=[39584], 99.95th=[39584], 00:36:59.099 | 99.99th=[43779] 00:36:59.099 bw ( KiB/s): min= 2560, max= 2688, per=4.14%, avg=2647.84, stdev=60.74, samples=19 00:36:59.099 iops : min= 640, max= 672, avg=661.95, stdev=15.20, samples=19 00:36:59.099 lat (msec) : 10=0.18%, 20=0.63%, 50=99.19% 00:36:59.100 cpu : usr=98.43%, sys=0.98%, ctx=167, majf=0, minf=31 00:36:59.100 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:59.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.100 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.100 issued rwts: total=6640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:59.100 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:59.100 filename2: (groupid=0, jobs=1): err= 0: pid=3252901: Tue Nov 26 19:27:14 2024 00:36:59.100 read: IOPS=666, BW=2665KiB/s 
(2728kB/s)(26.1MiB/10016msec) 00:36:59.100 slat (nsec): min=5697, max=73032, avg=12511.20, stdev=8732.86 00:36:59.100 clat (usec): min=9624, max=34797, avg=23890.19, stdev=1659.34 00:36:59.100 lat (usec): min=9633, max=34815, avg=23902.70, stdev=1659.27 00:36:59.100 clat percentiles (usec): 00:36:59.100 | 1.00th=[13829], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:36:59.100 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:36:59.100 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24773], 95.00th=[25035], 00:36:59.100 | 99.00th=[25560], 99.50th=[29230], 99.90th=[33817], 99.95th=[34341], 00:36:59.100 | 99.99th=[34866] 00:36:59.100 bw ( KiB/s): min= 2560, max= 2949, per=4.18%, avg=2668.25, stdev=85.51, samples=20 00:36:59.100 iops : min= 640, max= 737, avg=667.05, stdev=21.33, samples=20 00:36:59.100 lat (msec) : 10=0.42%, 20=1.14%, 50=98.44% 00:36:59.100 cpu : usr=98.96%, sys=0.75%, ctx=13, majf=0, minf=41 00:36:59.100 IO depths : 1=4.6%, 2=10.9%, 4=25.0%, 8=51.6%, 16=7.9%, 32=0.0%, >=64=0.0% 00:36:59.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.100 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.100 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:59.100 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:59.100 filename2: (groupid=0, jobs=1): err= 0: pid=3252903: Tue Nov 26 19:27:14 2024 00:36:59.100 read: IOPS=665, BW=2662KiB/s (2726kB/s)(26.0MiB/10003msec) 00:36:59.100 slat (nsec): min=5536, max=86217, avg=19704.53, stdev=14307.68 00:36:59.100 clat (usec): min=5693, max=39540, avg=23871.55, stdev=2191.67 00:36:59.100 lat (usec): min=5699, max=39559, avg=23891.26, stdev=2192.34 00:36:59.100 clat percentiles (usec): 00:36:59.100 | 1.00th=[15270], 5.00th=[22414], 10.00th=[23462], 20.00th=[23462], 00:36:59.100 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:59.100 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24773], 95.00th=[25035], 00:36:59.100 | 99.00th=[32900], 99.50th=[34341], 99.90th=[39584], 99.95th=[39584], 00:36:59.100 | 99.99th=[39584] 00:36:59.100 bw ( KiB/s): min= 2436, max= 2832, per=4.16%, avg=2656.21, stdev=88.05, samples=19 00:36:59.100 iops : min= 609, max= 708, avg=664.05, stdev=22.01, samples=19 00:36:59.100 lat (msec) : 10=0.24%, 20=3.49%, 50=96.27% 00:36:59.100 cpu : usr=99.11%, sys=0.60%, ctx=13, majf=0, minf=41 00:36:59.100 IO depths : 1=4.1%, 2=8.9%, 4=20.0%, 8=57.5%, 16=9.4%, 32=0.0%, >=64=0.0% 00:36:59.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.100 complete : 0=0.0%, 4=93.1%, 8=2.1%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.100 issued rwts: total=6657,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:59.100 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:59.100 filename2: (groupid=0, jobs=1): err= 0: pid=3252904: Tue Nov 26 19:27:14 2024 00:36:59.100 read: IOPS=676, BW=2706KiB/s (2771kB/s)(26.4MiB/10008msec) 00:36:59.100 slat (nsec): min=5515, max=84542, avg=18203.77, stdev=13573.15 00:36:59.100 clat (usec): min=6759, max=40577, avg=23496.65, stdev=3211.29 00:36:59.100 lat (usec): min=6765, max=40591, avg=23514.85, stdev=3212.90 00:36:59.100 clat percentiles (usec): 00:36:59.100 | 1.00th=[12387], 5.00th=[17433], 10.00th=[19792], 20.00th=[23462], 00:36:59.100 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:59.100 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24773], 95.00th=[28443], 00:36:59.100 | 99.00th=[33162], 
99.50th=[37487], 99.90th=[39584], 99.95th=[40633], 00:36:59.100 | 99.99th=[40633] 00:36:59.100 bw ( KiB/s): min= 2560, max= 2928, per=4.21%, avg=2692.47, stdev=88.61, samples=19 00:36:59.100 iops : min= 640, max= 732, avg=673.11, stdev=22.17, samples=19 00:36:59.100 lat (msec) : 10=0.56%, 20=10.37%, 50=89.07% 00:36:59.100 cpu : usr=98.97%, sys=0.72%, ctx=11, majf=0, minf=38 00:36:59.100 IO depths : 1=3.6%, 2=7.2%, 4=16.1%, 8=63.2%, 16=9.9%, 32=0.0%, >=64=0.0% 00:36:59.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.100 complete : 0=0.0%, 4=91.8%, 8=3.5%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.100 issued rwts: total=6770,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:59.100 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:59.100 filename2: (groupid=0, jobs=1): err= 0: pid=3252905: Tue Nov 26 19:27:14 2024 00:36:59.100 read: IOPS=665, BW=2660KiB/s (2724kB/s)(26.0MiB/10003msec) 00:36:59.100 slat (nsec): min=5503, max=93269, avg=23929.48, stdev=15277.73 00:36:59.100 clat (usec): min=10465, max=39429, avg=23828.45, stdev=1506.50 00:36:59.100 lat (usec): min=10471, max=39453, avg=23852.38, stdev=1507.38 00:36:59.100 clat percentiles (usec): 00:36:59.100 | 1.00th=[16712], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:36:59.100 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:59.100 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:59.100 | 99.00th=[25822], 99.50th=[27657], 99.90th=[39060], 99.95th=[39584], 00:36:59.100 | 99.99th=[39584] 00:36:59.100 bw ( KiB/s): min= 2560, max= 2688, per=4.15%, avg=2652.63, stdev=57.09, samples=19 00:36:59.100 iops : min= 640, max= 672, avg=663.16, stdev=14.27, samples=19 00:36:59.100 lat (msec) : 20=1.53%, 50=98.47% 00:36:59.100 cpu : usr=99.10%, sys=0.60%, ctx=11, majf=0, minf=27 00:36:59.100 IO depths : 1=5.8%, 2=12.0%, 4=24.7%, 8=50.8%, 16=6.7%, 32=0.0%, >=64=0.0% 00:36:59.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.100 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.100 issued rwts: total=6652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:59.100 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:59.100 filename2: (groupid=0, jobs=1): err= 0: pid=3252906: Tue Nov 26 19:27:14 2024 00:36:59.100 read: IOPS=664, BW=2659KiB/s (2723kB/s)(26.0MiB/10013msec) 00:36:59.100 slat (nsec): min=5672, max=95937, avg=14966.26, stdev=11750.97 00:36:59.100 clat (usec): min=13930, max=31408, avg=23949.24, stdev=951.41 00:36:59.100 lat (usec): min=13936, max=31430, avg=23964.21, stdev=950.37 00:36:59.100 clat percentiles (usec): 00:36:59.100 | 1.00th=[20317], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:36:59.100 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:36:59.100 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24773], 95.00th=[24773], 00:36:59.100 | 99.00th=[25297], 99.50th=[26084], 99.90th=[27919], 99.95th=[28181], 00:36:59.100 | 99.99th=[31327] 00:36:59.100 bw ( KiB/s): min= 2560, max= 2688, per=4.16%, avg=2654.32, stdev=57.91, samples=19 00:36:59.100 iops : min= 640, max= 672, avg=663.58, stdev=14.48, samples=19 00:36:59.100 lat (msec) : 20=0.78%, 50=99.22% 00:36:59.100 cpu : usr=98.83%, sys=0.77%, ctx=49, majf=0, minf=54 00:36:59.100 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:59.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.100 complete : 0=0.0%, 4=94.1%, 8=0.0%, 
16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.100 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:59.100 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:59.100 filename2: (groupid=0, jobs=1): err= 0: pid=3252907: Tue Nov 26 19:27:14 2024 00:36:59.100 read: IOPS=663, BW=2655KiB/s (2718kB/s)(25.9MiB/10005msec) 00:36:59.100 slat (nsec): min=5675, max=93004, avg=21367.55, stdev=13878.60 00:36:59.100 clat (usec): min=11333, max=30978, avg=23915.35, stdev=958.54 00:36:59.100 lat (usec): min=11340, max=30995, avg=23936.71, stdev=958.46 00:36:59.100 clat percentiles (usec): 00:36:59.100 | 1.00th=[22676], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:36:59.100 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:59.100 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:36:59.100 | 99.00th=[25297], 99.50th=[25560], 99.90th=[30802], 99.95th=[31065], 00:36:59.100 | 99.99th=[31065] 00:36:59.100 bw ( KiB/s): min= 2560, max= 2688, per=4.14%, avg=2647.58, stdev=61.13, samples=19 00:36:59.100 iops : min= 640, max= 672, avg=661.89, stdev=15.28, samples=19 00:36:59.100 lat (msec) : 20=0.48%, 50=99.52% 00:36:59.100 cpu : usr=98.87%, sys=0.83%, ctx=12, majf=0, minf=29 00:36:59.100 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:59.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.100 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.100 issued rwts: total=6640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:59.100 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:59.100 filename2: (groupid=0, jobs=1): err= 0: pid=3252908: Tue Nov 26 19:27:14 2024 00:36:59.100 read: IOPS=663, BW=2654KiB/s (2718kB/s)(25.9MiB/10004msec) 00:36:59.100 slat (nsec): min=5552, max=91097, avg=21216.99, stdev=14162.94 00:36:59.100 clat (usec): min=4255, max=40559, avg=23925.83, stdev=2500.44 00:36:59.100 lat (usec): min=4261, max=40579, avg=23947.04, stdev=2501.63 00:36:59.100 clat percentiles (usec): 00:36:59.100 | 1.00th=[15270], 5.00th=[20841], 10.00th=[23462], 20.00th=[23725], 00:36:59.100 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:59.100 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24773], 95.00th=[26346], 00:36:59.100 | 99.00th=[32637], 99.50th=[34341], 99.90th=[40633], 99.95th=[40633], 00:36:59.100 | 99.99th=[40633] 00:36:59.100 bw ( KiB/s): min= 2432, max= 2752, per=4.11%, avg=2626.53, stdev=101.23, samples=19 00:36:59.100 iops : min= 608, max= 688, avg=656.63, stdev=25.31, samples=19 00:36:59.100 lat (msec) : 10=0.27%, 20=3.81%, 50=95.92% 00:36:59.100 cpu : usr=98.93%, sys=0.68%, ctx=67, majf=0, minf=22 00:36:59.100 IO depths : 1=5.4%, 2=11.0%, 4=22.9%, 8=53.4%, 16=7.3%, 32=0.0%, >=64=0.0% 00:36:59.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.100 complete : 0=0.0%, 4=93.6%, 8=0.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.100 issued rwts: total=6638,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:59.100 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:59.100 filename2: (groupid=0, jobs=1): err= 0: pid=3252909: Tue Nov 26 19:27:14 2024 00:36:59.100 read: IOPS=678, BW=2716KiB/s (2781kB/s)(26.6MiB/10013msec) 00:36:59.100 slat (nsec): min=5658, max=86498, avg=14775.66, stdev=11955.37 00:36:59.101 clat (usec): min=9092, max=40239, avg=23441.19, stdev=3427.12 00:36:59.101 lat (usec): min=9109, max=40256, avg=23455.97, stdev=3428.39 00:36:59.101 clat 
percentiles (usec): 00:36:59.101 | 1.00th=[13304], 5.00th=[16188], 10.00th=[19006], 20.00th=[23462], 00:36:59.101 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:59.101 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24773], 95.00th=[28443], 00:36:59.101 | 99.00th=[35390], 99.50th=[37487], 99.90th=[39584], 99.95th=[40109], 00:36:59.101 | 99.99th=[40109] 00:36:59.101 bw ( KiB/s): min= 2528, max= 3056, per=4.25%, avg=2716.80, stdev=115.80, samples=20 00:36:59.101 iops : min= 632, max= 764, avg=679.20, stdev=28.95, samples=20 00:36:59.101 lat (msec) : 10=0.47%, 20=11.67%, 50=87.86% 00:36:59.101 cpu : usr=98.63%, sys=0.90%, ctx=64, majf=0, minf=29 00:36:59.101 IO depths : 1=3.8%, 2=7.8%, 4=17.6%, 8=61.6%, 16=9.2%, 32=0.0%, >=64=0.0% 00:36:59.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.101 complete : 0=0.0%, 4=92.1%, 8=2.6%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.101 issued rwts: total=6798,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:59.101 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:59.101 00:36:59.101 Run status group 0 (all jobs): 00:36:59.101 READ: bw=62.4MiB/s (65.4MB/s), 2639KiB/s-2762KiB/s (2703kB/s-2828kB/s), io=627MiB (657MB), run=10003-10049msec 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@10 -- # set +x 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:59.101 19:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:59.101 bdev_null0 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
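For reference, the xtrace above boils down to four RPCs per subsystem on the target side, plus two for teardown. A minimal standalone sketch, assuming a running SPDK nvmf target and rpc.py on its default socket (all names and values copied from the trace in this log):

    # 64 MiB null bdev, 512-byte blocks with 16 bytes of metadata, DIF type 1
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    # expose it over NVMe/TCP on 10.0.0.2:4420
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # teardown (destroy_subsystem above) reverses the order:
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py bdev_null_delete bdev_null0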
00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:59.101 [2024-11-26 19:27:15.041779] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:59.101 bdev_null1 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:59.101 { 00:36:59.101 "params": { 00:36:59.101 "name": "Nvme$subsystem", 00:36:59.101 "trtype": "$TEST_TRANSPORT", 00:36:59.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:59.101 "adrfam": "ipv4", 00:36:59.101 "trsvcid": "$NVMF_PORT", 00:36:59.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:59.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:59.101 "hdgst": ${hdgst:-false}, 00:36:59.101 "ddgst": ${ddgst:-false} 00:36:59.101 }, 00:36:59.101 "method": "bdev_nvme_attach_controller" 00:36:59.101 } 00:36:59.101 EOF 00:36:59.101 )") 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:59.101 19:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:59.102 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:59.102 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:36:59.102 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:59.102 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:59.102 19:27:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:59.102 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:59.102 19:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:59.102 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:59.102 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:59.102 19:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:59.102 19:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:59.102 19:27:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:59.102 19:27:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:59.102 { 00:36:59.102 "params": { 00:36:59.102 "name": "Nvme$subsystem", 00:36:59.102 "trtype": "$TEST_TRANSPORT", 00:36:59.102 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:59.102 "adrfam": "ipv4", 00:36:59.102 "trsvcid": "$NVMF_PORT", 00:36:59.102 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:59.102 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:59.102 "hdgst": ${hdgst:-false}, 00:36:59.102 "ddgst": ${ddgst:-false} 00:36:59.102 }, 00:36:59.102 "method": "bdev_nvme_attach_controller" 00:36:59.102 } 00:36:59.102 EOF 00:36:59.102 )") 00:36:59.102 
19:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:59.102 19:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:59.102 19:27:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:59.102 19:27:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:36:59.102 19:27:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:59.102 19:27:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:59.102 "params": { 00:36:59.102 "name": "Nvme0", 00:36:59.102 "trtype": "tcp", 00:36:59.102 "traddr": "10.0.0.2", 00:36:59.102 "adrfam": "ipv4", 00:36:59.102 "trsvcid": "4420", 00:36:59.102 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:59.102 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:59.102 "hdgst": false, 00:36:59.102 "ddgst": false 00:36:59.102 }, 00:36:59.102 "method": "bdev_nvme_attach_controller" 00:36:59.102 },{ 00:36:59.102 "params": { 00:36:59.102 "name": "Nvme1", 00:36:59.102 "trtype": "tcp", 00:36:59.102 "traddr": "10.0.0.2", 00:36:59.102 "adrfam": "ipv4", 00:36:59.102 "trsvcid": "4420", 00:36:59.102 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:59.102 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:59.102 "hdgst": false, 00:36:59.102 "ddgst": false 00:36:59.102 }, 00:36:59.102 "method": "bdev_nvme_attach_controller" 00:36:59.102 }' 00:36:59.102 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:59.102 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:59.102 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:59.102 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:59.102 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:59.102 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:59.102 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:59.102 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:59.102 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:59.102 19:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:59.102 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:59.102 ... 00:36:59.102 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:59.102 ... 
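The invocation just traced is stock fio with SPDK's bdev fio plugin preloaded; the two /dev/fd arguments are the generated JSON bdev config and the fio job file. Run by hand, with the same workspace paths and the JSON/job contents shown above saved to real files, it would look roughly like (file names ./bdev.json and ./dif.fio are placeholders):

    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=./bdev.json ./dif.fio

With ioengine=spdk_bdev each filenameN job drives an attached NVMe-oF bdev from userspace, so the kernel block layer never sees this I/O.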
00:36:59.102 fio-3.35 00:36:59.102 Starting 4 threads 00:37:04.386 00:37:04.386 filename0: (groupid=0, jobs=1): err= 0: pid=3255660: Tue Nov 26 19:27:21 2024 00:37:04.386 read: IOPS=2882, BW=22.5MiB/s (23.6MB/s)(113MiB/5001msec) 00:37:04.386 slat (usec): min=5, max=186, avg= 8.54, stdev= 5.18 00:37:04.386 clat (usec): min=1392, max=5681, avg=2752.87, stdev=194.85 00:37:04.386 lat (usec): min=1398, max=5716, avg=2761.42, stdev=195.16 00:37:04.386 clat percentiles (usec): 00:37:04.386 | 1.00th=[ 2311], 5.00th=[ 2540], 10.00th=[ 2606], 20.00th=[ 2704], 00:37:04.386 | 30.00th=[ 2704], 40.00th=[ 2737], 50.00th=[ 2737], 60.00th=[ 2737], 00:37:04.386 | 70.00th=[ 2769], 80.00th=[ 2769], 90.00th=[ 2933], 95.00th=[ 2999], 00:37:04.386 | 99.00th=[ 3556], 99.50th=[ 3949], 99.90th=[ 4621], 99.95th=[ 5080], 00:37:04.386 | 99.99th=[ 5211] 00:37:04.386 bw ( KiB/s): min=22701, max=23360, per=24.91%, avg=23071.67, stdev=214.86, samples=9 00:37:04.386 iops : min= 2837, max= 2920, avg=2883.89, stdev=26.99, samples=9 00:37:04.386 lat (msec) : 2=0.21%, 4=99.40%, 10=0.39% 00:37:04.386 cpu : usr=96.74%, sys=2.98%, ctx=9, majf=0, minf=71 00:37:04.386 IO depths : 1=0.1%, 2=0.1%, 4=72.0%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:04.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:04.386 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:04.386 issued rwts: total=14414,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:04.386 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:04.386 filename0: (groupid=0, jobs=1): err= 0: pid=3255661: Tue Nov 26 19:27:21 2024 00:37:04.386 read: IOPS=2891, BW=22.6MiB/s (23.7MB/s)(113MiB/5002msec) 00:37:04.386 slat (nsec): min=7993, max=74728, avg=9299.91, stdev=3350.14 00:37:04.386 clat (usec): min=1753, max=4482, avg=2741.77, stdev=171.48 00:37:04.386 lat (usec): min=1773, max=4491, avg=2751.07, stdev=171.72 00:37:04.386 clat percentiles (usec): 00:37:04.386 | 1.00th=[ 2278], 5.00th=[ 2540], 10.00th=[ 2573], 20.00th=[ 2704], 00:37:04.386 | 30.00th=[ 2704], 40.00th=[ 2737], 50.00th=[ 2737], 60.00th=[ 2737], 00:37:04.386 | 70.00th=[ 2769], 80.00th=[ 2769], 90.00th=[ 2933], 95.00th=[ 2966], 00:37:04.386 | 99.00th=[ 3392], 99.50th=[ 3621], 99.90th=[ 4178], 99.95th=[ 4293], 00:37:04.386 | 99.99th=[ 4424] 00:37:04.386 bw ( KiB/s): min=22976, max=23424, per=25.01%, avg=23157.33, stdev=141.08, samples=9 00:37:04.386 iops : min= 2872, max= 2928, avg=2894.67, stdev=17.64, samples=9 00:37:04.386 lat (msec) : 2=0.25%, 4=99.54%, 10=0.21% 00:37:04.386 cpu : usr=96.34%, sys=3.36%, ctx=18, majf=0, minf=85 00:37:04.386 IO depths : 1=0.1%, 2=0.1%, 4=71.4%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:04.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:04.386 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:04.386 issued rwts: total=14464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:04.386 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:04.386 filename1: (groupid=0, jobs=1): err= 0: pid=3255662: Tue Nov 26 19:27:21 2024 00:37:04.386 read: IOPS=2877, BW=22.5MiB/s (23.6MB/s)(112MiB/5002msec) 00:37:04.386 slat (nsec): min=5488, max=63038, avg=8029.68, stdev=3395.41 00:37:04.386 clat (usec): min=1548, max=6854, avg=2758.96, stdev=221.86 00:37:04.386 lat (usec): min=1553, max=6885, avg=2766.99, stdev=221.95 00:37:04.386 clat percentiles (usec): 00:37:04.386 | 1.00th=[ 2212], 5.00th=[ 2507], 10.00th=[ 2573], 20.00th=[ 2704], 00:37:04.386 | 30.00th=[ 2704], 40.00th=[ 
2737], 50.00th=[ 2737], 60.00th=[ 2737], 00:37:04.386 | 70.00th=[ 2769], 80.00th=[ 2802], 90.00th=[ 2966], 95.00th=[ 3032], 00:37:04.386 | 99.00th=[ 3589], 99.50th=[ 3752], 99.90th=[ 4359], 99.95th=[ 6587], 00:37:04.386 | 99.99th=[ 6652] 00:37:04.386 bw ( KiB/s): min=22704, max=23264, per=24.89%, avg=23052.44, stdev=207.04, samples=9 00:37:04.386 iops : min= 2838, max= 2908, avg=2881.56, stdev=25.88, samples=9 00:37:04.386 lat (msec) : 2=0.25%, 4=99.46%, 10=0.28% 00:37:04.386 cpu : usr=96.50%, sys=3.24%, ctx=7, majf=0, minf=135 00:37:04.386 IO depths : 1=0.1%, 2=0.2%, 4=72.1%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:04.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:04.386 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:04.386 issued rwts: total=14391,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:04.386 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:04.386 filename1: (groupid=0, jobs=1): err= 0: pid=3255663: Tue Nov 26 19:27:21 2024 00:37:04.386 read: IOPS=2926, BW=22.9MiB/s (24.0MB/s)(114MiB/5001msec) 00:37:04.386 slat (usec): min=5, max=158, avg= 7.70, stdev= 4.61 00:37:04.386 clat (usec): min=673, max=5076, avg=2714.85, stdev=318.20 00:37:04.386 lat (usec): min=679, max=5106, avg=2722.54, stdev=318.48 00:37:04.386 clat percentiles (usec): 00:37:04.386 | 1.00th=[ 2024], 5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2573], 00:37:04.386 | 30.00th=[ 2704], 40.00th=[ 2737], 50.00th=[ 2737], 60.00th=[ 2737], 00:37:04.386 | 70.00th=[ 2737], 80.00th=[ 2769], 90.00th=[ 2835], 95.00th=[ 3392], 00:37:04.386 | 99.00th=[ 3949], 99.50th=[ 4080], 99.90th=[ 4424], 99.95th=[ 4490], 00:37:04.386 | 99.99th=[ 4686] 00:37:04.386 bw ( KiB/s): min=22832, max=24016, per=25.19%, avg=23328.00, stdev=386.41, samples=9 00:37:04.386 iops : min= 2854, max= 3002, avg=2916.00, stdev=48.30, samples=9 00:37:04.386 lat (usec) : 750=0.03% 00:37:04.386 lat (msec) : 2=0.56%, 4=98.63%, 10=0.78% 00:37:04.386 cpu : usr=97.32%, sys=2.44%, ctx=9, majf=0, minf=125 00:37:04.386 IO depths : 1=0.1%, 2=0.2%, 4=67.4%, 8=32.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:04.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:04.386 complete : 0=0.0%, 4=94.6%, 8=5.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:04.386 issued rwts: total=14634,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:04.387 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:04.387 00:37:04.387 Run status group 0 (all jobs): 00:37:04.387 READ: bw=90.4MiB/s (94.8MB/s), 22.5MiB/s-22.9MiB/s (23.6MB/s-24.0MB/s), io=452MiB (474MB), run=5001-5002msec 00:37:04.387 19:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:37:04.387 19:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:04.387 19:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:04.387 19:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:04.387 19:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:04.387 19:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:04.387 19:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.387 19:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:04.387 19:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.387 19:27:21 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:04.387 19:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.387 19:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:04.387 19:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.387 19:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:04.387 19:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:04.387 19:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:04.387 19:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:04.387 19:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.387 19:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:04.387 19:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.387 19:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:04.387 19:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.387 19:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:04.387 19:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.387 00:37:04.387 real 0m24.611s 00:37:04.387 user 5m17.135s 00:37:04.387 sys 0m4.555s 00:37:04.387 19:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:04.387 19:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:04.387 ************************************ 00:37:04.387 END TEST fio_dif_rand_params 00:37:04.387 ************************************ 00:37:04.387 19:27:21 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:37:04.387 19:27:21 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:04.387 19:27:21 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:04.387 19:27:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:04.387 ************************************ 00:37:04.387 START TEST fio_dif_digest 00:37:04.387 ************************************ 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- 
target/dif.sh@28 -- # local sub 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:04.387 bdev_null0 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:04.387 [2024-11-26 19:27:21.569842] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:04.387 { 00:37:04.387 "params": { 00:37:04.387 "name": "Nvme$subsystem", 00:37:04.387 "trtype": "$TEST_TRANSPORT", 00:37:04.387 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:04.387 "adrfam": "ipv4", 00:37:04.387 "trsvcid": "$NVMF_PORT", 00:37:04.387 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:04.387 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:37:04.387 "hdgst": ${hdgst:-false}, 00:37:04.387 "ddgst": ${ddgst:-false} 00:37:04.387 }, 00:37:04.387 "method": "bdev_nvme_attach_controller" 00:37:04.387 } 00:37:04.387 EOF 00:37:04.387 )") 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:37:04.387 19:27:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:04.387 "params": { 00:37:04.387 "name": "Nvme0", 00:37:04.387 "trtype": "tcp", 00:37:04.387 "traddr": "10.0.0.2", 00:37:04.387 "adrfam": "ipv4", 00:37:04.387 "trsvcid": "4420", 00:37:04.387 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:04.387 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:04.387 "hdgst": true, 00:37:04.387 "ddgst": true 00:37:04.387 }, 00:37:04.387 "method": "bdev_nvme_attach_controller" 00:37:04.387 }' 00:37:04.648 19:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:04.648 19:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:04.648 19:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:04.648 19:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:04.648 19:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:04.648 19:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:04.648 19:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:04.649 19:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:04.649 19:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:04.649 19:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:04.909 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:04.909 ... 
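"hdgst": true and "ddgst": true in the printf above make the initiator emit and verify CRC32C header and data digests on every NVMe/TCP PDU, so this run exercises the transport-level integrity path on top of the DIF-formatted namespace. As a standalone file for --spdk_json_conf, the same attach would sit inside SPDK's usual subsystems wrapper; a sketch, with the wrapper structure assumed and the params copied from this log:

    cat > bdev.json <<'EOF'
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true, "ddgst": true
          }
        }]
      }]
    }
    EOF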
00:37:04.909 fio-3.35 00:37:04.909 Starting 3 threads 00:37:17.140 00:37:17.140 filename0: (groupid=0, jobs=1): err= 0: pid=3257068: Tue Nov 26 19:27:32 2024 00:37:17.140 read: IOPS=155, BW=19.4MiB/s (20.4MB/s)(195MiB/10010msec) 00:37:17.140 slat (nsec): min=5911, max=35250, avg=6771.57, stdev=1400.06 00:37:17.140 clat (usec): min=6614, max=94707, avg=19276.40, stdev=18449.28 00:37:17.140 lat (usec): min=6620, max=94714, avg=19283.17, stdev=18449.22 00:37:17.140 clat percentiles (usec): 00:37:17.140 | 1.00th=[ 8094], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[ 9765], 00:37:17.140 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10552], 60.00th=[10945], 00:37:17.140 | 70.00th=[11338], 80.00th=[49021], 90.00th=[51119], 95.00th=[51643], 00:37:17.140 | 99.00th=[91751], 99.50th=[92799], 99.90th=[93848], 99.95th=[94897], 00:37:17.140 | 99.99th=[94897] 00:37:17.140 bw ( KiB/s): min=11264, max=28416, per=19.02%, avg=19891.20, stdev=4683.07, samples=20 00:37:17.140 iops : min= 88, max= 222, avg=155.40, stdev=36.59, samples=20 00:37:17.140 lat (msec) : 10=26.33%, 20=53.37%, 50=2.18%, 100=18.11% 00:37:17.140 cpu : usr=95.74%, sys=4.03%, ctx=12, majf=0, minf=108 00:37:17.140 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:17.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.140 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.140 issued rwts: total=1557,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.140 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:17.140 filename0: (groupid=0, jobs=1): err= 0: pid=3257069: Tue Nov 26 19:27:32 2024 00:37:17.140 read: IOPS=332, BW=41.6MiB/s (43.6MB/s)(418MiB/10048msec) 00:37:17.140 slat (nsec): min=8318, max=32884, avg=9172.06, stdev=946.88 00:37:17.140 clat (usec): min=5813, max=50278, avg=8992.99, stdev=2018.86 00:37:17.140 lat (usec): min=5822, max=50288, avg=9002.17, stdev=2018.88 00:37:17.140 clat percentiles (usec): 00:37:17.140 | 1.00th=[ 6652], 5.00th=[ 7177], 10.00th=[ 7373], 20.00th=[ 7767], 00:37:17.140 | 30.00th=[ 8029], 40.00th=[ 8291], 50.00th=[ 8717], 60.00th=[ 9241], 00:37:17.140 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10683], 95.00th=[11076], 00:37:17.140 | 99.00th=[11863], 99.50th=[12256], 99.90th=[49021], 99.95th=[50070], 00:37:17.140 | 99.99th=[50070] 00:37:17.140 bw ( KiB/s): min=39424, max=46848, per=40.88%, avg=42764.80, stdev=1420.44, samples=20 00:37:17.140 iops : min= 308, max= 366, avg=334.10, stdev=11.10, samples=20 00:37:17.140 lat (msec) : 10=74.48%, 20=25.37%, 50=0.09%, 100=0.06% 00:37:17.140 cpu : usr=94.65%, sys=5.11%, ctx=45, majf=0, minf=89 00:37:17.140 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:17.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.140 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.140 issued rwts: total=3343,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.140 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:17.140 filename0: (groupid=0, jobs=1): err= 0: pid=3257070: Tue Nov 26 19:27:32 2024 00:37:17.140 read: IOPS=329, BW=41.2MiB/s (43.2MB/s)(414MiB/10047msec) 00:37:17.140 slat (nsec): min=5869, max=36731, avg=6732.57, stdev=1161.23 00:37:17.140 clat (usec): min=6104, max=51577, avg=9081.44, stdev=2123.93 00:37:17.140 lat (usec): min=6111, max=51584, avg=9088.17, stdev=2124.03 00:37:17.140 clat percentiles (usec): 00:37:17.140 | 1.00th=[ 6652], 5.00th=[ 7111], 10.00th=[ 7373], 20.00th=[ 
7635], 00:37:17.140 | 30.00th=[ 7963], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9503], 00:37:17.140 | 70.00th=[10028], 80.00th=[10421], 90.00th=[10945], 95.00th=[11338], 00:37:17.140 | 99.00th=[12125], 99.50th=[12387], 99.90th=[49546], 99.95th=[50594], 00:37:17.140 | 99.99th=[51643] 00:37:17.140 bw ( KiB/s): min=36864, max=46080, per=40.49%, avg=42355.20, stdev=1925.56, samples=20 00:37:17.140 iops : min= 288, max= 360, avg=330.90, stdev=15.04, samples=20 00:37:17.140 lat (msec) : 10=69.47%, 20=30.38%, 50=0.06%, 100=0.09% 00:37:17.140 cpu : usr=94.12%, sys=5.64%, ctx=14, majf=0, minf=209 00:37:17.140 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:17.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.140 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.140 issued rwts: total=3311,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.140 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:17.140 00:37:17.140 Run status group 0 (all jobs): 00:37:17.140 READ: bw=102MiB/s (107MB/s), 19.4MiB/s-41.6MiB/s (20.4MB/s-43.6MB/s), io=1026MiB (1076MB), run=10010-10048msec 00:37:17.140 19:27:32 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:37:17.140 19:27:32 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:37:17.140 19:27:32 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:37:17.140 19:27:32 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:17.140 19:27:32 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:37:17.140 19:27:32 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:17.140 19:27:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.140 19:27:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:17.140 19:27:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.140 19:27:32 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:17.140 19:27:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.140 19:27:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:17.140 19:27:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.140 00:37:17.140 real 0m11.255s 00:37:17.140 user 0m42.844s 00:37:17.140 sys 0m1.809s 00:37:17.140 19:27:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:17.140 19:27:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:17.140 ************************************ 00:37:17.140 END TEST fio_dif_digest 00:37:17.140 ************************************ 00:37:17.140 19:27:32 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:17.140 19:27:32 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:37:17.140 19:27:32 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:17.140 19:27:32 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:37:17.140 19:27:32 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:17.140 19:27:32 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:37:17.140 19:27:32 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:17.140 19:27:32 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:17.140 rmmod nvme_tcp 00:37:17.140 rmmod nvme_fabrics 00:37:17.140 rmmod nvme_keyring 00:37:17.140 19:27:32 
nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:17.140 19:27:32 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:37:17.141 19:27:32 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:37:17.141 19:27:32 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3246135 ']' 00:37:17.141 19:27:32 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3246135 00:37:17.141 19:27:32 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 3246135 ']' 00:37:17.141 19:27:32 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 3246135 00:37:17.141 19:27:32 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:37:17.141 19:27:32 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:17.141 19:27:32 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3246135 00:37:17.141 19:27:32 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:17.141 19:27:32 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:17.141 19:27:32 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3246135' 00:37:17.141 killing process with pid 3246135 00:37:17.141 19:27:32 nvmf_dif -- common/autotest_common.sh@973 -- # kill 3246135 00:37:17.141 19:27:32 nvmf_dif -- common/autotest_common.sh@978 -- # wait 3246135 00:37:17.141 19:27:33 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:17.141 19:27:33 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:19.689 Waiting for block devices as requested 00:37:19.689 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:19.689 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:19.689 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:19.689 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:19.689 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:19.689 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:19.950 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:19.950 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:19.950 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:20.210 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:20.210 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:20.469 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:20.469 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:20.469 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:20.729 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:20.729 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:20.729 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:21.299 19:27:38 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:21.299 19:27:38 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:21.299 19:27:38 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:37:21.299 19:27:38 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:37:21.299 19:27:38 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:21.299 19:27:38 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:37:21.299 19:27:38 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:21.299 19:27:38 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:21.299 19:27:38 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:21.299 19:27:38 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:21.299 19:27:38 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:23.215 19:27:40 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 
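The nvmftestfini teardown traced above reduces to a short, reusable sequence. A minimal sketch of the equivalent manual cleanup, assuming the interface name (cvl_0_1), target namespace (cvl_0_0_ns_spdk), and SPDK_NVMF iptables comment tag that this run uses:

# drop only the SPDK-tagged firewall rules, keeping everything else intact
iptables-save | grep -v SPDK_NVMF | iptables-restore
# tear down the target-side namespace and flush the initiator interface
ip netns del cvl_0_0_ns_spdk 2> /dev/null
ip -4 addr flush cvl_0_1
# unload the kernel initiator modules pulled in for the test
modprobe -r nvme-tcp nvme-fabrics nvme-keyring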
00:37:23.215 00:37:23.215 real 1m18.827s 00:37:23.215 user 8m2.666s 00:37:23.215 sys 0m22.126s 00:37:23.215 19:27:40 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:23.215 19:27:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:23.215 ************************************ 00:37:23.215 END TEST nvmf_dif 00:37:23.215 ************************************ 00:37:23.215 19:27:40 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:23.215 19:27:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:23.215 19:27:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:23.215 19:27:40 -- common/autotest_common.sh@10 -- # set +x 00:37:23.215 ************************************ 00:37:23.215 START TEST nvmf_abort_qd_sizes 00:37:23.215 ************************************ 00:37:23.215 19:27:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:23.476 * Looking for test storage... 00:37:23.476 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:23.476 19:27:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:23.476 19:27:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:37:23.476 19:27:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:23.476 19:27:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:23.476 19:27:40 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:23.476 19:27:40 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:23.476 19:27:40 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:23.476 19:27:40 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:37:23.476 19:27:40 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:37:23.476 19:27:40 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:37:23.476 19:27:40 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:37:23.476 19:27:40 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:37:23.476 19:27:40 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:37:23.476 19:27:40 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:37:23.476 19:27:40 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:23.476 19:27:40 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:37:23.476 19:27:40 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:37:23.476 19:27:40 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:23.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:23.477 --rc genhtml_branch_coverage=1 00:37:23.477 --rc genhtml_function_coverage=1 00:37:23.477 --rc genhtml_legend=1 00:37:23.477 --rc geninfo_all_blocks=1 00:37:23.477 --rc geninfo_unexecuted_blocks=1 00:37:23.477 00:37:23.477 ' 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:23.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:23.477 --rc genhtml_branch_coverage=1 00:37:23.477 --rc genhtml_function_coverage=1 00:37:23.477 --rc genhtml_legend=1 00:37:23.477 --rc geninfo_all_blocks=1 00:37:23.477 --rc geninfo_unexecuted_blocks=1 00:37:23.477 00:37:23.477 ' 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:23.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:23.477 --rc genhtml_branch_coverage=1 00:37:23.477 --rc genhtml_function_coverage=1 00:37:23.477 --rc genhtml_legend=1 00:37:23.477 --rc geninfo_all_blocks=1 00:37:23.477 --rc geninfo_unexecuted_blocks=1 00:37:23.477 00:37:23.477 ' 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:23.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:23.477 --rc genhtml_branch_coverage=1 00:37:23.477 --rc genhtml_function_coverage=1 00:37:23.477 --rc genhtml_legend=1 00:37:23.477 --rc geninfo_all_blocks=1 00:37:23.477 --rc geninfo_unexecuted_blocks=1 00:37:23.477 00:37:23.477 ' 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:23.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:37:23.477 19:27:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:31.698 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:31.698 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:37:31.698 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:31.698 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:31.698 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:31.698 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:31.698 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:31.698 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:37:31.698 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:31.698 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:37:31.698 19:27:47 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:37:31.698 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:37:31.698 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:37:31.698 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:37:31.698 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:37:31.698 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:31.698 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:31.698 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:31.698 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:31.698 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:31.698 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:31.698 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:31.698 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:31.698 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:31.698 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:31.698 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:31.698 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:31.698 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:31.698 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:31.698 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:31.698 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:31.698 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:31.698 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:31.698 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:31.698 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:31.698 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:31.699 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:31.699 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:31.699 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:31.699 19:27:47 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:31.699 19:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:31.699 19:27:48 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:31.699 19:27:48 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:31.699 19:27:48 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:31.699 19:27:48 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:31.699 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:31.699 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:37:31.699 00:37:31.699 --- 10.0.0.2 ping statistics --- 00:37:31.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:31.699 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:37:31.699 19:27:48 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:31.699 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:31.699 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:37:31.699 00:37:31.699 --- 10.0.0.1 ping statistics --- 00:37:31.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:31.699 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:37:31.699 19:27:48 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:31.699 19:27:48 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:37:31.699 19:27:48 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:37:31.699 19:27:48 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:35.007 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:35.007 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:35.007 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:35.007 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:35.007 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:35.007 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:35.007 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:35.007 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:35.007 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:35.007 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:35.007 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:35.007 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:35.007 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:35.007 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:35.007 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:35.007 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:35.007 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:35.007 19:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:35.007 19:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:35.007 19:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:35.007 19:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:35.007 19:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:35.007 19:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:35.007 19:27:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:37:35.007 19:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:35.007 19:27:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:35.007 19:27:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:35.007 19:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3266600 00:37:35.007 19:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3266600 00:37:35.007 19:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:37:35.007 19:27:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 3266600 ']' 00:37:35.007 19:27:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:35.007 19:27:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:35.007 19:27:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
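The dual-namespace plumbing that nvmf_tcp_init traced above can be reproduced by hand. A sketch, assuming the cvl_0_0/cvl_0_1 e810 netdev names and the 10.0.0.0/24 addressing this run detected (the iptables comment is shortened here for readability):

# move one port of the back-to-back pair into its own namespace (the target side)
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# address both ends; the initiator stays in the default namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port on the initiator-facing interface, tagged for later cleanup
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
# verify reachability in both directions before starting the target
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1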
00:37:35.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:35.007 19:27:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:35.007 19:27:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:35.268 [2024-11-26 19:27:52.217100] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:37:35.268 [2024-11-26 19:27:52.217175] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:35.268 [2024-11-26 19:27:52.317315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:35.268 [2024-11-26 19:27:52.371287] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:35.268 [2024-11-26 19:27:52.371338] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:35.268 [2024-11-26 19:27:52.371346] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:35.268 [2024-11-26 19:27:52.371354] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:35.268 [2024-11-26 19:27:52.371360] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:35.268 [2024-11-26 19:27:52.373762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:35.268 [2024-11-26 19:27:52.373924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:35.268 [2024-11-26 19:27:52.374084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:35.268 [2024-11-26 19:27:52.374085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:35.842 19:27:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:35.842 19:27:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:37:35.842 19:27:53 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:35.842 19:27:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:35.842 19:27:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:36.103 19:27:53 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:36.103 19:27:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:37:36.103 19:27:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:37:36.103 19:27:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:37:36.103 19:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:37:36.103 19:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:37:36.103 19:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:37:36.103 19:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:37:36.103 19:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:37:36.103 19:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:37:36.103 19:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:37:36.103 
19:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:37:36.103 19:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:37:36.103 19:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:37:36.103 19:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:37:36.103 19:27:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:37:36.103 19:27:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:37:36.103 19:27:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:37:36.103 19:27:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:36.103 19:27:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:36.103 19:27:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:36.103 ************************************ 00:37:36.103 START TEST spdk_target_abort 00:37:36.103 ************************************ 00:37:36.103 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:37:36.103 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:37:36.103 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:37:36.103 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.104 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:36.367 spdk_targetn1 00:37:36.367 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.367 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:36.367 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.367 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:36.367 [2024-11-26 19:27:53.456816] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:36.367 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.367 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:37:36.367 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.367 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:36.367 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.367 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:37:36.367 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.367 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:36.367 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.367 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:37:36.367 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.367 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:36.367 [2024-11-26 19:27:53.495993] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:36.367 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.367 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:37:36.367 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:36.367 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:36.367 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:37:36.367 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:36.367 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:36.367 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:36.367 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:36.367 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:36.367 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:36.367 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:36.367 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:36.367 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:36.367 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:36.367 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:37:36.367 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:36.367 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:36.367 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:36.367 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:36.367 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:36.367 19:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:36.628 [2024-11-26 19:27:53.669326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:189 nsid:1 lba:40 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:37:36.628 [2024-11-26 19:27:53.669376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:37:36.628 [2024-11-26 19:27:53.676761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:232 len:8 PRP1 0x200004abe000 PRP2 0x0 00:37:36.628 [2024-11-26 19:27:53.676793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:001e p:1 m:0 dnr:0 00:37:36.628 [2024-11-26 19:27:53.677251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:248 len:8 PRP1 0x200004abe000 PRP2 0x0 00:37:36.628 [2024-11-26 19:27:53.677271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0021 p:1 m:0 dnr:0 00:37:36.628 [2024-11-26 19:27:53.684729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:472 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:36.628 [2024-11-26 19:27:53.684757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:003d p:1 m:0 dnr:0 00:37:36.628 [2024-11-26 19:27:53.694876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:816 len:8 PRP1 0x200004abe000 PRP2 0x0 00:37:36.628 [2024-11-26 19:27:53.694906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0068 p:1 m:0 dnr:0 00:37:36.628 [2024-11-26 19:27:53.716742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:1424 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:36.628 [2024-11-26 19:27:53.716775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00b3 p:1 m:0 dnr:0 00:37:36.628 [2024-11-26 19:27:53.732769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:1928 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:37:36.628 [2024-11-26 19:27:53.732801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00f5 p:1 m:0 dnr:0 00:37:36.628 [2024-11-26 19:27:53.740753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2176 len:8 PRP1 0x200004abe000 PRP2 0x0 00:37:36.628 [2024-11-26 19:27:53.740781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:37:36.628 [2024-11-26 19:27:53.756753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2656 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:36.628 [2024-11-26 19:27:53.756784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:36.628 [2024-11-26 19:27:53.764740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2904 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:36.628 [2024-11-26 19:27:53.764770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:37:36.628 [2024-11-26 19:27:53.781213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3456 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:37:36.628 [2024-11-26 19:27:53.781244] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00b1 p:0 m:0 dnr:0 00:37:39.953 Initializing NVMe Controllers 00:37:39.953 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:39.953 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:39.953 Initialization complete. Launching workers. 00:37:39.953 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11325, failed: 11 00:37:39.953 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2742, failed to submit 8594 00:37:39.953 success 728, unsuccessful 2014, failed 0 00:37:39.953 19:27:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:39.953 19:27:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:39.953 [2024-11-26 19:27:56.949250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:168 nsid:1 lba:296 len:8 PRP1 0x200004e5a000 PRP2 0x0 00:37:39.953 [2024-11-26 19:27:56.949287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:168 cdw0:0 sqhd:0033 p:1 m:0 dnr:0 00:37:39.953 [2024-11-26 19:27:56.957315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:191 nsid:1 lba:472 len:8 PRP1 0x200004e42000 PRP2 0x0 00:37:39.953 [2024-11-26 19:27:56.957338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:191 cdw0:0 sqhd:004d p:1 m:0 dnr:0 00:37:39.953 [2024-11-26 19:27:56.973309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:888 len:8 PRP1 0x200004e42000 PRP2 0x0 00:37:39.953 [2024-11-26 19:27:56.973337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:0077 p:1 m:0 dnr:0 00:37:39.953 [2024-11-26 19:27:57.052210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:185 nsid:1 lba:2784 len:8 PRP1 0x200004e5a000 PRP2 0x0 00:37:39.953 [2024-11-26 19:27:57.052236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:185 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:37:39.954 [2024-11-26 19:27:57.068303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:184 nsid:1 lba:3032 len:8 PRP1 0x200004e58000 PRP2 0x0 00:37:39.954 [2024-11-26 19:27:57.068327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:184 cdw0:0 sqhd:0086 p:0 m:0 dnr:0 00:37:40.524 [2024-11-26 19:27:57.435416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:186 nsid:1 lba:11576 len:8 PRP1 0x200004e40000 PRP2 0x0 00:37:40.524 [2024-11-26 19:27:57.435445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:186 cdw0:0 sqhd:00b2 p:0 m:0 dnr:0 00:37:41.909 [2024-11-26 19:27:58.917903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:190 nsid:1 lba:45440 len:8 PRP1 0x200004e3e000 PRP2 0x0 00:37:41.909 [2024-11-26 19:27:58.917934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:190 cdw0:0 sqhd:0036 p:1 m:0 dnr:0 00:37:42.860 [2024-11-26 19:27:59.852013] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:170 nsid:1 lba:66672 len:8 PRP1 0x200004e4c000 PRP2 0x0 00:37:42.860 [2024-11-26 19:27:59.852038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:170 cdw0:0 sqhd:00a3 p:1 m:0 dnr:0 00:37:43.121 Initializing NVMe Controllers 00:37:43.121 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:43.121 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:43.121 Initialization complete. Launching workers. 00:37:43.121 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8586, failed: 8 00:37:43.121 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1226, failed to submit 7368 00:37:43.121 success 323, unsuccessful 903, failed 0 00:37:43.121 19:28:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:43.121 19:28:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:44.064 [2024-11-26 19:28:01.191744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:157 nsid:1 lba:114032 len:8 PRP1 0x200004af2000 PRP2 0x0 00:37:44.064 [2024-11-26 19:28:01.191778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:157 cdw0:0 sqhd:00fd p:0 m:0 dnr:0 00:37:46.610 Initializing NVMe Controllers 00:37:46.610 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:46.610 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:46.610 Initialization complete. Launching workers. 
00:37:46.610 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42890, failed: 1 00:37:46.610 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2703, failed to submit 40188 00:37:46.610 success 592, unsuccessful 2111, failed 0 00:37:46.610 19:28:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:37:46.610 19:28:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.610 19:28:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:46.610 19:28:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.610 19:28:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:37:46.610 19:28:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.610 19:28:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:47.996 19:28:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.996 19:28:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3266600 00:37:47.996 19:28:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 3266600 ']' 00:37:47.996 19:28:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 3266600 00:37:47.996 19:28:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:37:47.996 19:28:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:47.996 19:28:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3266600 00:37:47.996 19:28:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:47.996 19:28:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:47.996 19:28:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3266600' 00:37:47.996 killing process with pid 3266600 00:37:47.996 19:28:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 3266600 00:37:47.996 19:28:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 3266600 00:37:48.257 00:37:48.257 real 0m12.103s 00:37:48.257 user 0m49.315s 00:37:48.257 sys 0m2.069s 00:37:48.257 19:28:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:48.257 19:28:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:48.257 ************************************ 00:37:48.257 END TEST spdk_target_abort 00:37:48.257 ************************************ 00:37:48.257 19:28:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:37:48.257 19:28:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:48.257 19:28:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:48.257 19:28:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:48.257 ************************************ 00:37:48.257 START TEST kernel_target_abort 00:37:48.257 
************************************ 00:37:48.257 19:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:37:48.257 19:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:37:48.257 19:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:37:48.257 19:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:48.257 19:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:48.257 19:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:48.257 19:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:48.257 19:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:48.257 19:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:48.257 19:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:48.257 19:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:48.257 19:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:48.257 19:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:37:48.257 19:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:37:48.257 19:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:37:48.257 19:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:48.257 19:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:48.257 19:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:48.257 19:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:37:48.257 19:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:37:48.257 19:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:37:48.257 19:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:48.257 19:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:51.560 Waiting for block devices as requested 00:37:51.560 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:51.829 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:51.829 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:51.829 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:52.099 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:52.099 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:52.099 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:52.099 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:52.364 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:52.364 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:52.623 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:52.623 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:52.623 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:52.882 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:52.882 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:52.882 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:53.147 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:53.407 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:37:53.407 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:53.407 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:37:53.407 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:37:53.407 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:53.407 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:37:53.407 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:37:53.407 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:37:53.407 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:53.407 No valid GPT data, bailing 00:37:53.407 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:53.407 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:37:53.407 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:37:53.407 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:37:53.407 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:37:53.407 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:53.407 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:53.407 19:28:10 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:53.407 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:37:53.407 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:37:53.407 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:37:53.407 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:37:53.407 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:37:53.407 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:37:53.407 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:37:53.407 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:37:53.407 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:53.407 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:37:53.407 00:37:53.407 Discovery Log Number of Records 2, Generation counter 2 00:37:53.407 =====Discovery Log Entry 0====== 00:37:53.407 trtype: tcp 00:37:53.407 adrfam: ipv4 00:37:53.407 subtype: current discovery subsystem 00:37:53.407 treq: not specified, sq flow control disable supported 00:37:53.407 portid: 1 00:37:53.407 trsvcid: 4420 00:37:53.407 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:53.407 traddr: 10.0.0.1 00:37:53.407 eflags: none 00:37:53.407 sectype: none 00:37:53.407 =====Discovery Log Entry 1====== 00:37:53.407 trtype: tcp 00:37:53.407 adrfam: ipv4 00:37:53.407 subtype: nvme subsystem 00:37:53.407 treq: not specified, sq flow control disable supported 00:37:53.407 portid: 1 00:37:53.407 trsvcid: 4420 00:37:53.408 subnqn: nqn.2016-06.io.spdk:testnqn 00:37:53.408 traddr: 10.0.0.1 00:37:53.408 eflags: none 00:37:53.408 sectype: none 00:37:53.408 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:37:53.408 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:53.408 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:53.408 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:37:53.408 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:53.408 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:53.408 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:53.408 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:53.408 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:53.408 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:53.408 19:28:10 
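The trace above is the whole kernel-target bring-up: configure_kernel_target (nvmf/common.sh) builds an NVMe-oF target out of the in-kernel nvmet driver purely through configfs writes, then nvme discover confirms that both the discovery subsystem and testnqn are being served on 10.0.0.1:4420. A minimal stand-alone sketch of the same sequence — the attribute names are the standard nvmet configfs ones, /dev/nvme0n1 is the backing device picked by the block-device scan above, and the model-string echo from the trace is omitted:

modprobe nvmet                                   # exposes /sys/kernel/config/nvmet
modprobe nvmet-tcp                               # TCP transport for the port below
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$nvmet/ports/1"
echo 1 > "$subsys/attr_allow_any_host"           # no host NQN filtering
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp > "$nvmet/ports/1/addr_trtype"
echo 4420 > "$nvmet/ports/1/addr_trsvcid"
echo ipv4 > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"     # the port starts serving here
nvme discover -t tcp -a 10.0.0.1 -s 4420         # should list testnqn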
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:53.408 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:53.408 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:53.408 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:53.408 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:37:53.668 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:53.668 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:37:53.668 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:53.668 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:53.668 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:53.668 19:28:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:56.969 Initializing NVMe Controllers 00:37:56.969 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:56.969 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:56.969 Initialization complete. Launching workers. 00:37:56.969 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 68068, failed: 0 00:37:56.969 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 68068, failed to submit 0 00:37:56.969 success 0, unsuccessful 68068, failed 0 00:37:56.969 19:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:56.969 19:28:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:00.275 Initializing NVMe Controllers 00:38:00.275 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:00.275 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:00.275 Initialization complete. Launching workers. 
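Each run here is SPDK's abort example pointed at that kernel target; the qds array from the trace drives three passes over one connection string, so the sweep expands to the loop below (paths as used in this workspace). The counters show why the queue depth matters: at -q 4 every I/O gets an abort submitted and all 68068 of them lose the race (success 0), while the deeper queues leave most I/Os with no abort submitted at all — 91503 of 122349 at -q 24.

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
for qd in 4 24 64; do
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
done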
00:38:00.275 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 122349, failed: 0 00:38:00.275 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30846, failed to submit 91503 00:38:00.275 success 0, unsuccessful 30846, failed 0 00:38:00.275 19:28:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:00.275 19:28:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:02.820 Initializing NVMe Controllers 00:38:02.820 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:02.820 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:02.820 Initialization complete. Launching workers. 00:38:02.820 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146051, failed: 0 00:38:02.820 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36522, failed to submit 109529 00:38:02.820 success 0, unsuccessful 36522, failed 0 00:38:02.820 19:28:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:38:02.820 19:28:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:38:02.820 19:28:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:38:02.820 19:28:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:02.820 19:28:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:02.820 19:28:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:02.820 19:28:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:02.820 19:28:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:38:02.820 19:28:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:38:03.082 19:28:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:06.384 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:06.384 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:06.384 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:06.384 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:06.384 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:06.384 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:06.384 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:06.384 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:06.384 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:06.384 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:06.645 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:06.645 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:06.645 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:06.645 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:06.645 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:38:06.645 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:08.565 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:38:08.566 00:38:08.566 real 0m20.427s 00:38:08.566 user 0m9.878s 00:38:08.566 sys 0m6.196s 00:38:08.566 19:28:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:08.566 19:28:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:08.566 ************************************ 00:38:08.566 END TEST kernel_target_abort 00:38:08.566 ************************************ 00:38:08.827 19:28:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:38:08.827 19:28:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:38:08.827 19:28:25 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:08.827 19:28:25 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:38:08.827 19:28:25 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:08.827 19:28:25 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:38:08.827 19:28:25 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:08.827 19:28:25 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:08.827 rmmod nvme_tcp 00:38:08.827 rmmod nvme_fabrics 00:38:08.827 rmmod nvme_keyring 00:38:08.827 19:28:25 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:08.827 19:28:25 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:38:08.827 19:28:25 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:38:08.827 19:28:25 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3266600 ']' 00:38:08.827 19:28:25 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3266600 00:38:08.827 19:28:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 3266600 ']' 00:38:08.827 19:28:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 3266600 00:38:08.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3266600) - No such process 00:38:08.827 19:28:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 3266600 is not found' 00:38:08.827 Process with pid 3266600 is not found 00:38:08.827 19:28:25 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:38:08.827 19:28:25 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:12.131 Waiting for block devices as requested 00:38:12.131 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:12.131 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:12.393 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:12.393 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:12.393 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:12.655 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:12.655 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:12.655 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:12.917 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:12.917 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:13.179 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:13.179 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:13.179 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:13.439 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:13.439 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:13.439 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:13.700 
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:13.960 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:13.960 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:13.960 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:38:13.960 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:38:13.960 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:13.960 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:38:13.960 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:13.960 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:13.960 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:13.960 19:28:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:13.961 19:28:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:15.875 19:28:33 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:15.875 00:38:15.875 real 0m52.693s 00:38:15.875 user 1m4.647s 00:38:15.875 sys 0m19.609s 00:38:15.875 19:28:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:15.875 19:28:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:15.875 ************************************ 00:38:15.875 END TEST nvmf_abort_qd_sizes 00:38:15.875 ************************************ 00:38:16.137 19:28:33 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:16.137 19:28:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:16.137 19:28:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:16.137 19:28:33 -- common/autotest_common.sh@10 -- # set +x 00:38:16.137 ************************************ 00:38:16.137 START TEST keyring_file 00:38:16.137 ************************************ 00:38:16.137 19:28:33 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:16.137 * Looking for test storage... 
00:38:16.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:16.137 19:28:33 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:16.137 19:28:33 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:38:16.137 19:28:33 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:16.137 19:28:33 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:16.137 19:28:33 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:16.137 19:28:33 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:16.137 19:28:33 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:16.137 19:28:33 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:38:16.137 19:28:33 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:38:16.137 19:28:33 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:38:16.137 19:28:33 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:38:16.137 19:28:33 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:38:16.137 19:28:33 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:38:16.137 19:28:33 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:38:16.137 19:28:33 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:16.137 19:28:33 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:38:16.137 19:28:33 keyring_file -- scripts/common.sh@345 -- # : 1 00:38:16.399 19:28:33 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:16.399 19:28:33 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:16.399 19:28:33 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:38:16.399 19:28:33 keyring_file -- scripts/common.sh@353 -- # local d=1 00:38:16.399 19:28:33 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:16.399 19:28:33 keyring_file -- scripts/common.sh@355 -- # echo 1 00:38:16.399 19:28:33 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:38:16.399 19:28:33 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:38:16.399 19:28:33 keyring_file -- scripts/common.sh@353 -- # local d=2 00:38:16.399 19:28:33 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:16.399 19:28:33 keyring_file -- scripts/common.sh@355 -- # echo 2 00:38:16.399 19:28:33 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:38:16.399 19:28:33 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:16.399 19:28:33 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:16.399 19:28:33 keyring_file -- scripts/common.sh@368 -- # return 0 00:38:16.399 19:28:33 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:16.399 19:28:33 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:16.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:16.399 --rc genhtml_branch_coverage=1 00:38:16.399 --rc genhtml_function_coverage=1 00:38:16.399 --rc genhtml_legend=1 00:38:16.399 --rc geninfo_all_blocks=1 00:38:16.399 --rc geninfo_unexecuted_blocks=1 00:38:16.399 00:38:16.399 ' 00:38:16.399 19:28:33 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:16.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:16.399 --rc genhtml_branch_coverage=1 00:38:16.399 --rc genhtml_function_coverage=1 00:38:16.399 --rc genhtml_legend=1 00:38:16.399 --rc geninfo_all_blocks=1 
00:38:16.399 --rc geninfo_unexecuted_blocks=1 00:38:16.399 00:38:16.399 ' 00:38:16.399 19:28:33 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:16.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:16.399 --rc genhtml_branch_coverage=1 00:38:16.399 --rc genhtml_function_coverage=1 00:38:16.399 --rc genhtml_legend=1 00:38:16.399 --rc geninfo_all_blocks=1 00:38:16.399 --rc geninfo_unexecuted_blocks=1 00:38:16.399 00:38:16.399 ' 00:38:16.399 19:28:33 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:16.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:16.399 --rc genhtml_branch_coverage=1 00:38:16.399 --rc genhtml_function_coverage=1 00:38:16.399 --rc genhtml_legend=1 00:38:16.399 --rc geninfo_all_blocks=1 00:38:16.399 --rc geninfo_unexecuted_blocks=1 00:38:16.399 00:38:16.399 ' 00:38:16.399 19:28:33 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:16.399 19:28:33 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:16.399 19:28:33 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:38:16.399 19:28:33 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:16.399 19:28:33 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:16.399 19:28:33 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:16.399 19:28:33 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:16.399 19:28:33 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:16.399 19:28:33 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:16.399 19:28:33 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:16.399 19:28:33 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:16.399 19:28:33 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:16.399 19:28:33 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:16.399 19:28:33 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:16.399 19:28:33 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:16.399 19:28:33 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:16.399 19:28:33 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:16.399 19:28:33 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:16.400 19:28:33 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:16.400 19:28:33 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:16.400 19:28:33 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:38:16.400 19:28:33 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:16.400 19:28:33 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:16.400 19:28:33 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:16.400 19:28:33 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:16.400 19:28:33 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:16.400 19:28:33 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:16.400 19:28:33 keyring_file -- paths/export.sh@5 -- # export PATH 00:38:16.400 19:28:33 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:16.400 19:28:33 keyring_file -- nvmf/common.sh@51 -- # : 0 00:38:16.400 19:28:33 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:16.400 19:28:33 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:16.400 19:28:33 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:16.400 19:28:33 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:16.400 19:28:33 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:16.400 19:28:33 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:16.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:16.400 19:28:33 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:16.400 19:28:33 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:16.400 19:28:33 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:16.400 19:28:33 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:16.400 19:28:33 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:16.400 19:28:33 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:16.400 19:28:33 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:38:16.400 19:28:33 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:38:16.400 19:28:33 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:38:16.400 19:28:33 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:16.400 19:28:33 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
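prep_key, which starts here, writes each hex key to a temp file in the NVMe TLS PSK interchange format and locks the mode down to 0600 (a 0660 variant is deliberately rejected later in this test, so the permission check is load-bearing). A hedged sketch of what the format_interchange_psk | python step in the following trace computes — the CRC trailer and the "00" digest field follow the PSK interchange convention as I read it, so treat the exact encoding as an assumption:

key_hex=00112233445566778899aabbccddeeff
path=$(mktemp)                                    # e.g. /tmp/tmp.BhXYU4HROf
python3 - "$key_hex" > "$path" <<'PY'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")       # integrity trailer over the raw key
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())
PY
chmod 0600 "$path"                                # keyring_file_add_key insists on this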
00:38:16.400 19:28:33 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:16.400 19:28:33 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:16.400 19:28:33 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:16.400 19:28:33 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:16.400 19:28:33 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.BhXYU4HROf 00:38:16.400 19:28:33 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:16.400 19:28:33 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:16.400 19:28:33 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:16.400 19:28:33 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:16.400 19:28:33 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:16.400 19:28:33 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:16.400 19:28:33 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:16.400 19:28:33 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.BhXYU4HROf 00:38:16.400 19:28:33 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.BhXYU4HROf 00:38:16.400 19:28:33 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.BhXYU4HROf 00:38:16.400 19:28:33 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:38:16.400 19:28:33 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:16.400 19:28:33 keyring_file -- keyring/common.sh@17 -- # name=key1 00:38:16.400 19:28:33 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:16.400 19:28:33 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:16.400 19:28:33 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:16.400 19:28:33 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ySBSPlLuIL 00:38:16.400 19:28:33 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:16.400 19:28:33 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:16.400 19:28:33 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:16.400 19:28:33 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:16.400 19:28:33 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:16.400 19:28:33 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:16.400 19:28:33 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:16.400 19:28:33 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ySBSPlLuIL 00:38:16.400 19:28:33 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ySBSPlLuIL 00:38:16.400 19:28:33 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.ySBSPlLuIL 00:38:16.400 19:28:33 keyring_file -- keyring/file.sh@30 -- # tgtpid=3276760 00:38:16.400 19:28:33 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3276760 00:38:16.400 19:28:33 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:16.400 19:28:33 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3276760 ']' 00:38:16.400 19:28:33 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:16.400 19:28:33 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:16.400 19:28:33 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:16.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:16.400 19:28:33 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:16.400 19:28:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:16.400 [2024-11-26 19:28:33.579414] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:38:16.400 [2024-11-26 19:28:33.579492] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3276760 ] 00:38:16.661 [2024-11-26 19:28:33.672796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:16.661 [2024-11-26 19:28:33.727781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:17.233 19:28:34 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:17.233 19:28:34 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:17.233 19:28:34 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:38:17.233 19:28:34 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.233 19:28:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:17.233 [2024-11-26 19:28:34.383908] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:17.233 null0 00:38:17.233 [2024-11-26 19:28:34.415952] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:17.233 [2024-11-26 19:28:34.416389] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:17.233 19:28:34 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.233 19:28:34 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:17.233 19:28:34 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:17.233 19:28:34 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:17.233 19:28:34 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:38:17.233 19:28:34 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:17.233 19:28:34 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:38:17.233 19:28:34 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:17.233 19:28:34 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:17.233 19:28:34 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.233 19:28:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:17.494 [2024-11-26 19:28:34.448017] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:38:17.494 request: 00:38:17.494 { 00:38:17.494 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:38:17.494 "secure_channel": false, 00:38:17.494 "listen_address": { 00:38:17.494 "trtype": "tcp", 00:38:17.494 "traddr": "127.0.0.1", 00:38:17.494 "trsvcid": "4420" 00:38:17.494 }, 00:38:17.494 "method": "nvmf_subsystem_add_listener", 00:38:17.494 "req_id": 1 00:38:17.494 } 00:38:17.494 Got JSON-RPC error response 00:38:17.494 response: 00:38:17.494 { 00:38:17.494 
"code": -32602, 00:38:17.494 "message": "Invalid parameters" 00:38:17.494 } 00:38:17.494 19:28:34 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:38:17.494 19:28:34 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:17.494 19:28:34 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:17.494 19:28:34 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:17.494 19:28:34 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:17.494 19:28:34 keyring_file -- keyring/file.sh@47 -- # bperfpid=3276847 00:38:17.494 19:28:34 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3276847 /var/tmp/bperf.sock 00:38:17.494 19:28:34 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:38:17.495 19:28:34 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3276847 ']' 00:38:17.495 19:28:34 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:17.495 19:28:34 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:17.495 19:28:34 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:17.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:17.495 19:28:34 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:17.495 19:28:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:17.495 [2024-11-26 19:28:34.516757] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:38:17.495 [2024-11-26 19:28:34.516823] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3276847 ] 00:38:17.495 [2024-11-26 19:28:34.609748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:17.495 [2024-11-26 19:28:34.662239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:18.436 19:28:35 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:18.436 19:28:35 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:18.436 19:28:35 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BhXYU4HROf 00:38:18.436 19:28:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BhXYU4HROf 00:38:18.436 19:28:35 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ySBSPlLuIL 00:38:18.436 19:28:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ySBSPlLuIL 00:38:18.697 19:28:35 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:38:18.697 19:28:35 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:38:18.697 19:28:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:18.697 19:28:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:18.697 19:28:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:38:18.697 19:28:35 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.BhXYU4HROf == \/\t\m\p\/\t\m\p\.\B\h\X\Y\U\4\H\R\O\f ]] 00:38:18.697 19:28:35 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:38:18.697 19:28:35 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:38:18.697 19:28:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:18.697 19:28:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:18.697 19:28:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:18.957 19:28:36 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.ySBSPlLuIL == \/\t\m\p\/\t\m\p\.\y\S\B\S\P\l\L\u\I\L ]] 00:38:18.957 19:28:36 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:38:18.957 19:28:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:18.957 19:28:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:18.957 19:28:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:18.957 19:28:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:18.958 19:28:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:19.218 19:28:36 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:38:19.218 19:28:36 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:38:19.218 19:28:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:19.218 19:28:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:19.218 19:28:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:19.218 19:28:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:19.218 19:28:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:19.479 19:28:36 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:38:19.479 19:28:36 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:19.479 19:28:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:19.479 [2024-11-26 19:28:36.594429] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:19.479 nvme0n1 00:38:19.739 19:28:36 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:38:19.739 19:28:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:19.739 19:28:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:19.739 19:28:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:19.739 19:28:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:19.739 19:28:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:19.739 19:28:36 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:38:19.739 19:28:36 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:38:19.739 19:28:36 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:38:19.739 19:28:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:19.739 19:28:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:19.739 19:28:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:19.739 19:28:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:20.000 19:28:37 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:38:20.000 19:28:37 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:20.000 Running I/O for 1 seconds... 00:38:21.381 19059.00 IOPS, 74.45 MiB/s 00:38:21.381 Latency(us) 00:38:21.381 [2024-11-26T18:28:38.594Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:21.381 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:38:21.381 nvme0n1 : 1.00 19115.86 74.67 0.00 0.00 6684.41 2293.76 14527.15 00:38:21.381 [2024-11-26T18:28:38.594Z] =================================================================================================================== 00:38:21.381 [2024-11-26T18:28:38.594Z] Total : 19115.86 74.67 0.00 0.00 6684.41 2293.76 14527.15 00:38:21.381 { 00:38:21.381 "results": [ 00:38:21.381 { 00:38:21.381 "job": "nvme0n1", 00:38:21.381 "core_mask": "0x2", 00:38:21.381 "workload": "randrw", 00:38:21.381 "percentage": 50, 00:38:21.381 "status": "finished", 00:38:21.381 "queue_depth": 128, 00:38:21.381 "io_size": 4096, 00:38:21.381 "runtime": 1.003774, 00:38:21.381 "iops": 19115.856756600588, 00:38:21.381 "mibps": 74.67131545547105, 00:38:21.381 "io_failed": 0, 00:38:21.381 "io_timeout": 0, 00:38:21.381 "avg_latency_us": 6684.40993398652, 00:38:21.381 "min_latency_us": 2293.76, 00:38:21.381 "max_latency_us": 14527.146666666667 00:38:21.381 } 00:38:21.381 ], 00:38:21.381 "core_count": 1 00:38:21.381 } 00:38:21.381 19:28:38 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:21.381 19:28:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:21.381 19:28:38 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:38:21.381 19:28:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:21.381 19:28:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:21.381 19:28:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:21.381 19:28:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:21.381 19:28:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:21.381 19:28:38 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:38:21.381 19:28:38 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:38:21.381 19:28:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:21.381 19:28:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:21.381 19:28:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:21.381 19:28:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:21.381 19:28:38 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:21.642 19:28:38 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:38:21.642 19:28:38 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:21.642 19:28:38 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:21.642 19:28:38 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:21.642 19:28:38 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:21.642 19:28:38 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:21.642 19:28:38 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:21.642 19:28:38 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:21.642 19:28:38 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:21.642 19:28:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:21.903 [2024-11-26 19:28:38.918962] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:21.903 [2024-11-26 19:28:38.919736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7ec50 (107): Transport endpoint is not connected 00:38:21.903 [2024-11-26 19:28:38.920731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7ec50 (9): Bad file descriptor 00:38:21.903 [2024-11-26 19:28:38.921734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:21.903 [2024-11-26 19:28:38.921747] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:21.903 [2024-11-26 19:28:38.921753] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:21.903 [2024-11-26 19:28:38.921759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
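That failure is the point of this case: key1 is not the PSK the target was set up with, so the TLS handshake apparently never completes (the socket comes back as not connected) and the attach surfaces -5 (Input/output error), which the NOT wrapper counts as a pass; the RPC request and error response it produced are dumped next. The same expect-failure shape, stand-alone:

if scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
       -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
       -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
then
    echo "attach with the wrong PSK unexpectedly succeeded" >&2
    exit 1
fi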
00:38:21.903 request: 00:38:21.903 { 00:38:21.903 "name": "nvme0", 00:38:21.903 "trtype": "tcp", 00:38:21.903 "traddr": "127.0.0.1", 00:38:21.903 "adrfam": "ipv4", 00:38:21.903 "trsvcid": "4420", 00:38:21.903 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:21.903 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:21.903 "prchk_reftag": false, 00:38:21.903 "prchk_guard": false, 00:38:21.903 "hdgst": false, 00:38:21.903 "ddgst": false, 00:38:21.903 "psk": "key1", 00:38:21.903 "allow_unrecognized_csi": false, 00:38:21.903 "method": "bdev_nvme_attach_controller", 00:38:21.903 "req_id": 1 00:38:21.903 } 00:38:21.903 Got JSON-RPC error response 00:38:21.903 response: 00:38:21.903 { 00:38:21.903 "code": -5, 00:38:21.903 "message": "Input/output error" 00:38:21.903 } 00:38:21.903 19:28:38 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:21.903 19:28:38 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:21.903 19:28:38 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:21.903 19:28:38 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:21.903 19:28:38 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:38:21.903 19:28:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:21.903 19:28:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:21.903 19:28:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:21.903 19:28:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:21.903 19:28:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:22.164 19:28:39 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:38:22.165 19:28:39 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:38:22.165 19:28:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:22.165 19:28:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:22.165 19:28:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:22.165 19:28:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:22.165 19:28:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:22.165 19:28:39 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:38:22.165 19:28:39 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:38:22.165 19:28:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:22.425 19:28:39 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:38:22.425 19:28:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:38:22.425 19:28:39 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:38:22.425 19:28:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:22.425 19:28:39 keyring_file -- keyring/file.sh@78 -- # jq length 00:38:22.685 19:28:39 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:38:22.685 19:28:39 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.BhXYU4HROf 00:38:22.685 19:28:39 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.BhXYU4HROf 00:38:22.685 19:28:39 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:22.685 19:28:39 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.BhXYU4HROf 00:38:22.685 19:28:39 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:22.685 19:28:39 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:22.685 19:28:39 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:22.685 19:28:39 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:22.685 19:28:39 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BhXYU4HROf 00:38:22.685 19:28:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BhXYU4HROf 00:38:22.946 [2024-11-26 19:28:39.963433] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.BhXYU4HROf': 0100660 00:38:22.946 [2024-11-26 19:28:39.963454] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:38:22.946 request: 00:38:22.946 { 00:38:22.946 "name": "key0", 00:38:22.946 "path": "/tmp/tmp.BhXYU4HROf", 00:38:22.946 "method": "keyring_file_add_key", 00:38:22.946 "req_id": 1 00:38:22.946 } 00:38:22.946 Got JSON-RPC error response 00:38:22.946 response: 00:38:22.946 { 00:38:22.946 "code": -1, 00:38:22.946 "message": "Operation not permitted" 00:38:22.946 } 00:38:22.946 19:28:39 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:22.946 19:28:39 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:22.946 19:28:39 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:22.946 19:28:39 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:22.946 19:28:39 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.BhXYU4HROf 00:38:22.946 19:28:39 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BhXYU4HROf 00:38:22.946 19:28:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BhXYU4HROf 00:38:23.212 19:28:40 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.BhXYU4HROf 00:38:23.212 19:28:40 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:38:23.212 19:28:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:23.212 19:28:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:23.212 19:28:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:23.212 19:28:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:23.212 19:28:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:23.212 19:28:40 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:38:23.212 19:28:40 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:23.212 19:28:40 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:23.212 19:28:40 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:23.212 19:28:40 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:23.212 19:28:40 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:23.212 19:28:40 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:23.212 19:28:40 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:23.212 19:28:40 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:23.212 19:28:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:23.555 [2024-11-26 19:28:40.528879] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.BhXYU4HROf': No such file or directory 00:38:23.555 [2024-11-26 19:28:40.528899] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:38:23.555 [2024-11-26 19:28:40.528913] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:38:23.555 [2024-11-26 19:28:40.528919] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:38:23.555 [2024-11-26 19:28:40.528925] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:23.555 [2024-11-26 19:28:40.528930] bdev_nvme.c:6769:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:38:23.555 request: 00:38:23.555 { 00:38:23.555 "name": "nvme0", 00:38:23.555 "trtype": "tcp", 00:38:23.555 "traddr": "127.0.0.1", 00:38:23.555 "adrfam": "ipv4", 00:38:23.555 "trsvcid": "4420", 00:38:23.555 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:23.555 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:23.555 "prchk_reftag": false, 00:38:23.555 "prchk_guard": false, 00:38:23.555 "hdgst": false, 00:38:23.555 "ddgst": false, 00:38:23.555 "psk": "key0", 00:38:23.555 "allow_unrecognized_csi": false, 00:38:23.555 "method": "bdev_nvme_attach_controller", 00:38:23.555 "req_id": 1 00:38:23.555 } 00:38:23.555 Got JSON-RPC error response 00:38:23.555 response: 00:38:23.555 { 00:38:23.555 "code": -19, 00:38:23.555 "message": "No such device" 00:38:23.555 } 00:38:23.555 19:28:40 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:23.555 19:28:40 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:23.555 19:28:40 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:23.555 19:28:40 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:23.555 19:28:40 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:38:23.555 19:28:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:23.555 19:28:40 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:23.555 19:28:40 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:38:23.555 19:28:40 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:23.555 19:28:40 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:23.555 19:28:40 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:23.555 19:28:40 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:23.555 19:28:40 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.u5Lto8s55t 00:38:23.555 19:28:40 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:23.555 19:28:40 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:23.555 19:28:40 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:23.555 19:28:40 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:23.555 19:28:40 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:23.555 19:28:40 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:23.555 19:28:40 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:23.861 19:28:40 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.u5Lto8s55t 00:38:23.861 19:28:40 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.u5Lto8s55t 00:38:23.861 19:28:40 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.u5Lto8s55t 00:38:23.861 19:28:40 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.u5Lto8s55t 00:38:23.861 19:28:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.u5Lto8s55t 00:38:23.862 19:28:40 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:23.862 19:28:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:24.127 nvme0n1 00:38:24.127 19:28:41 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:38:24.127 19:28:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:24.127 19:28:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:24.127 19:28:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:24.127 19:28:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:24.127 19:28:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:24.387 19:28:41 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:38:24.387 19:28:41 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:38:24.387 19:28:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:24.387 19:28:41 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:38:24.387 19:28:41 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:38:24.387 19:28:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:24.387 19:28:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:24.387 19:28:41 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:24.649 19:28:41 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:38:24.649 19:28:41 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:38:24.649 19:28:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:24.649 19:28:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:24.649 19:28:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:24.649 19:28:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:24.649 19:28:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:24.909 19:28:41 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:38:24.909 19:28:41 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:24.909 19:28:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:24.909 19:28:42 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:38:24.909 19:28:42 keyring_file -- keyring/file.sh@105 -- # jq length 00:38:24.909 19:28:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:25.170 19:28:42 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:38:25.170 19:28:42 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.u5Lto8s55t 00:38:25.170 19:28:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.u5Lto8s55t 00:38:25.431 19:28:42 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ySBSPlLuIL 00:38:25.431 19:28:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ySBSPlLuIL 00:38:25.431 19:28:42 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:25.431 19:28:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:25.691 nvme0n1 00:38:25.691 19:28:42 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:38:25.691 19:28:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:38:25.952 19:28:43 keyring_file -- keyring/file.sh@113 -- # config='{ 00:38:25.952 "subsystems": [ 00:38:25.952 { 00:38:25.952 "subsystem": "keyring", 00:38:25.952 "config": [ 00:38:25.952 { 00:38:25.952 "method": "keyring_file_add_key", 00:38:25.952 "params": { 00:38:25.952 "name": "key0", 00:38:25.952 "path": "/tmp/tmp.u5Lto8s55t" 00:38:25.952 } 00:38:25.952 }, 00:38:25.952 { 00:38:25.952 "method": "keyring_file_add_key", 00:38:25.952 "params": { 00:38:25.952 "name": "key1", 00:38:25.952 "path": "/tmp/tmp.ySBSPlLuIL" 00:38:25.952 } 00:38:25.952 } 00:38:25.952 ] 00:38:25.952 
}, 00:38:25.952 { 00:38:25.952 "subsystem": "iobuf", 00:38:25.952 "config": [ 00:38:25.952 { 00:38:25.952 "method": "iobuf_set_options", 00:38:25.952 "params": { 00:38:25.952 "small_pool_count": 8192, 00:38:25.952 "large_pool_count": 1024, 00:38:25.952 "small_bufsize": 8192, 00:38:25.952 "large_bufsize": 135168, 00:38:25.952 "enable_numa": false 00:38:25.952 } 00:38:25.952 } 00:38:25.952 ] 00:38:25.952 }, 00:38:25.952 { 00:38:25.952 "subsystem": "sock", 00:38:25.952 "config": [ 00:38:25.952 { 00:38:25.952 "method": "sock_set_default_impl", 00:38:25.952 "params": { 00:38:25.952 "impl_name": "posix" 00:38:25.952 } 00:38:25.952 }, 00:38:25.952 { 00:38:25.952 "method": "sock_impl_set_options", 00:38:25.952 "params": { 00:38:25.952 "impl_name": "ssl", 00:38:25.952 "recv_buf_size": 4096, 00:38:25.952 "send_buf_size": 4096, 00:38:25.952 "enable_recv_pipe": true, 00:38:25.952 "enable_quickack": false, 00:38:25.952 "enable_placement_id": 0, 00:38:25.952 "enable_zerocopy_send_server": true, 00:38:25.952 "enable_zerocopy_send_client": false, 00:38:25.952 "zerocopy_threshold": 0, 00:38:25.952 "tls_version": 0, 00:38:25.952 "enable_ktls": false 00:38:25.952 } 00:38:25.952 }, 00:38:25.952 { 00:38:25.952 "method": "sock_impl_set_options", 00:38:25.952 "params": { 00:38:25.952 "impl_name": "posix", 00:38:25.952 "recv_buf_size": 2097152, 00:38:25.952 "send_buf_size": 2097152, 00:38:25.952 "enable_recv_pipe": true, 00:38:25.952 "enable_quickack": false, 00:38:25.952 "enable_placement_id": 0, 00:38:25.952 "enable_zerocopy_send_server": true, 00:38:25.952 "enable_zerocopy_send_client": false, 00:38:25.952 "zerocopy_threshold": 0, 00:38:25.952 "tls_version": 0, 00:38:25.952 "enable_ktls": false 00:38:25.952 } 00:38:25.952 } 00:38:25.952 ] 00:38:25.952 }, 00:38:25.952 { 00:38:25.952 "subsystem": "vmd", 00:38:25.952 "config": [] 00:38:25.952 }, 00:38:25.952 { 00:38:25.952 "subsystem": "accel", 00:38:25.952 "config": [ 00:38:25.952 { 00:38:25.952 "method": "accel_set_options", 00:38:25.952 "params": { 00:38:25.952 "small_cache_size": 128, 00:38:25.952 "large_cache_size": 16, 00:38:25.952 "task_count": 2048, 00:38:25.952 "sequence_count": 2048, 00:38:25.952 "buf_count": 2048 00:38:25.952 } 00:38:25.952 } 00:38:25.952 ] 00:38:25.952 }, 00:38:25.952 { 00:38:25.952 "subsystem": "bdev", 00:38:25.952 "config": [ 00:38:25.952 { 00:38:25.952 "method": "bdev_set_options", 00:38:25.952 "params": { 00:38:25.952 "bdev_io_pool_size": 65535, 00:38:25.952 "bdev_io_cache_size": 256, 00:38:25.952 "bdev_auto_examine": true, 00:38:25.952 "iobuf_small_cache_size": 128, 00:38:25.952 "iobuf_large_cache_size": 16 00:38:25.952 } 00:38:25.952 }, 00:38:25.952 { 00:38:25.952 "method": "bdev_raid_set_options", 00:38:25.952 "params": { 00:38:25.952 "process_window_size_kb": 1024, 00:38:25.952 "process_max_bandwidth_mb_sec": 0 00:38:25.952 } 00:38:25.952 }, 00:38:25.952 { 00:38:25.952 "method": "bdev_iscsi_set_options", 00:38:25.952 "params": { 00:38:25.952 "timeout_sec": 30 00:38:25.952 } 00:38:25.952 }, 00:38:25.952 { 00:38:25.952 "method": "bdev_nvme_set_options", 00:38:25.952 "params": { 00:38:25.952 "action_on_timeout": "none", 00:38:25.952 "timeout_us": 0, 00:38:25.952 "timeout_admin_us": 0, 00:38:25.952 "keep_alive_timeout_ms": 10000, 00:38:25.952 "arbitration_burst": 0, 00:38:25.952 "low_priority_weight": 0, 00:38:25.952 "medium_priority_weight": 0, 00:38:25.952 "high_priority_weight": 0, 00:38:25.952 "nvme_adminq_poll_period_us": 10000, 00:38:25.952 "nvme_ioq_poll_period_us": 0, 00:38:25.952 "io_queue_requests": 512, 00:38:25.952 
"delay_cmd_submit": true, 00:38:25.952 "transport_retry_count": 4, 00:38:25.952 "bdev_retry_count": 3, 00:38:25.952 "transport_ack_timeout": 0, 00:38:25.952 "ctrlr_loss_timeout_sec": 0, 00:38:25.952 "reconnect_delay_sec": 0, 00:38:25.952 "fast_io_fail_timeout_sec": 0, 00:38:25.952 "disable_auto_failback": false, 00:38:25.952 "generate_uuids": false, 00:38:25.952 "transport_tos": 0, 00:38:25.952 "nvme_error_stat": false, 00:38:25.952 "rdma_srq_size": 0, 00:38:25.952 "io_path_stat": false, 00:38:25.952 "allow_accel_sequence": false, 00:38:25.952 "rdma_max_cq_size": 0, 00:38:25.952 "rdma_cm_event_timeout_ms": 0, 00:38:25.952 "dhchap_digests": [ 00:38:25.952 "sha256", 00:38:25.952 "sha384", 00:38:25.952 "sha512" 00:38:25.952 ], 00:38:25.952 "dhchap_dhgroups": [ 00:38:25.952 "null", 00:38:25.952 "ffdhe2048", 00:38:25.952 "ffdhe3072", 00:38:25.952 "ffdhe4096", 00:38:25.952 "ffdhe6144", 00:38:25.952 "ffdhe8192" 00:38:25.952 ] 00:38:25.952 } 00:38:25.952 }, 00:38:25.952 { 00:38:25.952 "method": "bdev_nvme_attach_controller", 00:38:25.952 "params": { 00:38:25.952 "name": "nvme0", 00:38:25.952 "trtype": "TCP", 00:38:25.952 "adrfam": "IPv4", 00:38:25.952 "traddr": "127.0.0.1", 00:38:25.952 "trsvcid": "4420", 00:38:25.952 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:25.952 "prchk_reftag": false, 00:38:25.952 "prchk_guard": false, 00:38:25.952 "ctrlr_loss_timeout_sec": 0, 00:38:25.952 "reconnect_delay_sec": 0, 00:38:25.952 "fast_io_fail_timeout_sec": 0, 00:38:25.952 "psk": "key0", 00:38:25.952 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:25.952 "hdgst": false, 00:38:25.952 "ddgst": false, 00:38:25.952 "multipath": "multipath" 00:38:25.952 } 00:38:25.952 }, 00:38:25.952 { 00:38:25.952 "method": "bdev_nvme_set_hotplug", 00:38:25.952 "params": { 00:38:25.952 "period_us": 100000, 00:38:25.952 "enable": false 00:38:25.952 } 00:38:25.952 }, 00:38:25.952 { 00:38:25.952 "method": "bdev_wait_for_examine" 00:38:25.952 } 00:38:25.952 ] 00:38:25.952 }, 00:38:25.952 { 00:38:25.952 "subsystem": "nbd", 00:38:25.952 "config": [] 00:38:25.952 } 00:38:25.952 ] 00:38:25.952 }' 00:38:25.952 19:28:43 keyring_file -- keyring/file.sh@115 -- # killprocess 3276847 00:38:25.952 19:28:43 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3276847 ']' 00:38:25.952 19:28:43 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3276847 00:38:25.952 19:28:43 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:25.952 19:28:43 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:25.952 19:28:43 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3276847 00:38:25.952 19:28:43 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:25.952 19:28:43 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:25.952 19:28:43 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3276847' 00:38:25.952 killing process with pid 3276847 00:38:25.952 19:28:43 keyring_file -- common/autotest_common.sh@973 -- # kill 3276847 00:38:25.952 Received shutdown signal, test time was about 1.000000 seconds 00:38:25.952 00:38:25.952 Latency(us) 00:38:25.952 [2024-11-26T18:28:43.165Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:25.952 [2024-11-26T18:28:43.165Z] =================================================================================================================== 00:38:25.952 [2024-11-26T18:28:43.165Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:25.953 19:28:43 
keyring_file -- common/autotest_common.sh@978 -- # wait 3276847 00:38:26.212 19:28:43 keyring_file -- keyring/file.sh@118 -- # bperfpid=3278659 00:38:26.212 19:28:43 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3278659 /var/tmp/bperf.sock 00:38:26.212 19:28:43 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3278659 ']' 00:38:26.212 19:28:43 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:26.212 19:28:43 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:26.213 19:28:43 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:38:26.213 19:28:43 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:26.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:26.213 19:28:43 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:26.213 19:28:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:26.213 19:28:43 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:38:26.213 "subsystems": [ 00:38:26.213 { 00:38:26.213 "subsystem": "keyring", 00:38:26.213 "config": [ 00:38:26.213 { 00:38:26.213 "method": "keyring_file_add_key", 00:38:26.213 "params": { 00:38:26.213 "name": "key0", 00:38:26.213 "path": "/tmp/tmp.u5Lto8s55t" 00:38:26.213 } 00:38:26.213 }, 00:38:26.213 { 00:38:26.213 "method": "keyring_file_add_key", 00:38:26.213 "params": { 00:38:26.213 "name": "key1", 00:38:26.213 "path": "/tmp/tmp.ySBSPlLuIL" 00:38:26.213 } 00:38:26.213 } 00:38:26.213 ] 00:38:26.213 }, 00:38:26.213 { 00:38:26.213 "subsystem": "iobuf", 00:38:26.213 "config": [ 00:38:26.213 { 00:38:26.213 "method": "iobuf_set_options", 00:38:26.213 "params": { 00:38:26.213 "small_pool_count": 8192, 00:38:26.213 "large_pool_count": 1024, 00:38:26.213 "small_bufsize": 8192, 00:38:26.213 "large_bufsize": 135168, 00:38:26.213 "enable_numa": false 00:38:26.213 } 00:38:26.213 } 00:38:26.213 ] 00:38:26.213 }, 00:38:26.213 { 00:38:26.213 "subsystem": "sock", 00:38:26.213 "config": [ 00:38:26.213 { 00:38:26.213 "method": "sock_set_default_impl", 00:38:26.213 "params": { 00:38:26.213 "impl_name": "posix" 00:38:26.213 } 00:38:26.213 }, 00:38:26.213 { 00:38:26.213 "method": "sock_impl_set_options", 00:38:26.213 "params": { 00:38:26.213 "impl_name": "ssl", 00:38:26.213 "recv_buf_size": 4096, 00:38:26.213 "send_buf_size": 4096, 00:38:26.213 "enable_recv_pipe": true, 00:38:26.213 "enable_quickack": false, 00:38:26.213 "enable_placement_id": 0, 00:38:26.213 "enable_zerocopy_send_server": true, 00:38:26.213 "enable_zerocopy_send_client": false, 00:38:26.213 "zerocopy_threshold": 0, 00:38:26.213 "tls_version": 0, 00:38:26.213 "enable_ktls": false 00:38:26.213 } 00:38:26.213 }, 00:38:26.213 { 00:38:26.213 "method": "sock_impl_set_options", 00:38:26.213 "params": { 00:38:26.213 "impl_name": "posix", 00:38:26.213 "recv_buf_size": 2097152, 00:38:26.213 "send_buf_size": 2097152, 00:38:26.213 "enable_recv_pipe": true, 00:38:26.213 "enable_quickack": false, 00:38:26.213 "enable_placement_id": 0, 00:38:26.213 "enable_zerocopy_send_server": true, 00:38:26.213 "enable_zerocopy_send_client": false, 00:38:26.213 "zerocopy_threshold": 0, 00:38:26.213 "tls_version": 0, 00:38:26.213 "enable_ktls": false 00:38:26.213 } 00:38:26.213 } 00:38:26.213 ] 00:38:26.213 }, 
00:38:26.213 { 00:38:26.213 "subsystem": "vmd", 00:38:26.213 "config": [] 00:38:26.213 }, 00:38:26.213 { 00:38:26.213 "subsystem": "accel", 00:38:26.213 "config": [ 00:38:26.213 { 00:38:26.213 "method": "accel_set_options", 00:38:26.213 "params": { 00:38:26.213 "small_cache_size": 128, 00:38:26.213 "large_cache_size": 16, 00:38:26.213 "task_count": 2048, 00:38:26.213 "sequence_count": 2048, 00:38:26.213 "buf_count": 2048 00:38:26.213 } 00:38:26.213 } 00:38:26.213 ] 00:38:26.213 }, 00:38:26.213 { 00:38:26.213 "subsystem": "bdev", 00:38:26.213 "config": [ 00:38:26.213 { 00:38:26.213 "method": "bdev_set_options", 00:38:26.213 "params": { 00:38:26.213 "bdev_io_pool_size": 65535, 00:38:26.213 "bdev_io_cache_size": 256, 00:38:26.213 "bdev_auto_examine": true, 00:38:26.213 "iobuf_small_cache_size": 128, 00:38:26.213 "iobuf_large_cache_size": 16 00:38:26.213 } 00:38:26.213 }, 00:38:26.213 { 00:38:26.213 "method": "bdev_raid_set_options", 00:38:26.213 "params": { 00:38:26.213 "process_window_size_kb": 1024, 00:38:26.213 "process_max_bandwidth_mb_sec": 0 00:38:26.213 } 00:38:26.213 }, 00:38:26.213 { 00:38:26.213 "method": "bdev_iscsi_set_options", 00:38:26.213 "params": { 00:38:26.213 "timeout_sec": 30 00:38:26.213 } 00:38:26.213 }, 00:38:26.213 { 00:38:26.213 "method": "bdev_nvme_set_options", 00:38:26.213 "params": { 00:38:26.213 "action_on_timeout": "none", 00:38:26.213 "timeout_us": 0, 00:38:26.213 "timeout_admin_us": 0, 00:38:26.213 "keep_alive_timeout_ms": 10000, 00:38:26.213 "arbitration_burst": 0, 00:38:26.213 "low_priority_weight": 0, 00:38:26.213 "medium_priority_weight": 0, 00:38:26.213 "high_priority_weight": 0, 00:38:26.213 "nvme_adminq_poll_period_us": 10000, 00:38:26.213 "nvme_ioq_poll_period_us": 0, 00:38:26.213 "io_queue_requests": 512, 00:38:26.213 "delay_cmd_submit": true, 00:38:26.213 "transport_retry_count": 4, 00:38:26.213 "bdev_retry_count": 3, 00:38:26.213 "transport_ack_timeout": 0, 00:38:26.213 "ctrlr_loss_timeout_sec": 0, 00:38:26.213 "reconnect_delay_sec": 0, 00:38:26.213 "fast_io_fail_timeout_sec": 0, 00:38:26.213 "disable_auto_failback": false, 00:38:26.213 "generate_uuids": false, 00:38:26.213 "transport_tos": 0, 00:38:26.213 "nvme_error_stat": false, 00:38:26.213 "rdma_srq_size": 0, 00:38:26.213 "io_path_stat": false, 00:38:26.213 "allow_accel_sequence": false, 00:38:26.213 "rdma_max_cq_size": 0, 00:38:26.213 "rdma_cm_event_timeout_ms": 0, 00:38:26.213 "dhchap_digests": [ 00:38:26.213 "sha256", 00:38:26.213 "sha384", 00:38:26.213 "sha512" 00:38:26.213 ], 00:38:26.213 "dhchap_dhgroups": [ 00:38:26.213 "null", 00:38:26.213 "ffdhe2048", 00:38:26.213 "ffdhe3072", 00:38:26.213 "ffdhe4096", 00:38:26.213 "ffdhe6144", 00:38:26.213 "ffdhe8192" 00:38:26.213 ] 00:38:26.213 } 00:38:26.213 }, 00:38:26.213 { 00:38:26.213 "method": "bdev_nvme_attach_controller", 00:38:26.213 "params": { 00:38:26.213 "name": "nvme0", 00:38:26.213 "trtype": "TCP", 00:38:26.213 "adrfam": "IPv4", 00:38:26.213 "traddr": "127.0.0.1", 00:38:26.213 "trsvcid": "4420", 00:38:26.213 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:26.213 "prchk_reftag": false, 00:38:26.213 "prchk_guard": false, 00:38:26.213 "ctrlr_loss_timeout_sec": 0, 00:38:26.213 "reconnect_delay_sec": 0, 00:38:26.213 "fast_io_fail_timeout_sec": 0, 00:38:26.213 "psk": "key0", 00:38:26.213 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:26.213 "hdgst": false, 00:38:26.213 "ddgst": false, 00:38:26.213 "multipath": "multipath" 00:38:26.213 } 00:38:26.213 }, 00:38:26.213 { 00:38:26.213 "method": "bdev_nvme_set_hotplug", 00:38:26.213 "params": { 
00:38:26.213 "period_us": 100000, 00:38:26.213 "enable": false 00:38:26.213 } 00:38:26.213 }, 00:38:26.213 { 00:38:26.213 "method": "bdev_wait_for_examine" 00:38:26.213 } 00:38:26.213 ] 00:38:26.213 }, 00:38:26.213 { 00:38:26.213 "subsystem": "nbd", 00:38:26.213 "config": [] 00:38:26.213 } 00:38:26.213 ] 00:38:26.213 }' 00:38:26.213 [2024-11-26 19:28:43.303902] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 00:38:26.213 [2024-11-26 19:28:43.303958] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3278659 ] 00:38:26.213 [2024-11-26 19:28:43.388403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:26.213 [2024-11-26 19:28:43.416406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:26.474 [2024-11-26 19:28:43.560523] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:27.044 19:28:44 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:27.044 19:28:44 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:27.044 19:28:44 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:38:27.044 19:28:44 keyring_file -- keyring/file.sh@121 -- # jq length 00:38:27.044 19:28:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:27.304 19:28:44 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:38:27.304 19:28:44 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:38:27.304 19:28:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:27.304 19:28:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:27.304 19:28:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:27.304 19:28:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:27.304 19:28:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:27.304 19:28:44 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:38:27.304 19:28:44 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:38:27.305 19:28:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:27.305 19:28:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:27.305 19:28:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:27.305 19:28:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:27.305 19:28:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:27.565 19:28:44 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:38:27.565 19:28:44 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:38:27.565 19:28:44 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:38:27.565 19:28:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:38:27.827 19:28:44 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:38:27.827 19:28:44 keyring_file -- keyring/file.sh@1 -- # cleanup 00:38:27.827 19:28:44 keyring_file -- 
keyring/file.sh@19 -- # rm -f /tmp/tmp.u5Lto8s55t /tmp/tmp.ySBSPlLuIL 00:38:27.827 19:28:44 keyring_file -- keyring/file.sh@20 -- # killprocess 3278659 00:38:27.827 19:28:44 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3278659 ']' 00:38:27.827 19:28:44 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3278659 00:38:27.827 19:28:44 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:27.827 19:28:44 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:27.827 19:28:44 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3278659 00:38:27.827 19:28:44 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:27.827 19:28:44 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:27.827 19:28:44 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3278659' 00:38:27.827 killing process with pid 3278659 00:38:27.827 19:28:44 keyring_file -- common/autotest_common.sh@973 -- # kill 3278659 00:38:27.827 Received shutdown signal, test time was about 1.000000 seconds 00:38:27.827 00:38:27.827 Latency(us) 00:38:27.827 [2024-11-26T18:28:45.040Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:27.827 [2024-11-26T18:28:45.040Z] =================================================================================================================== 00:38:27.827 [2024-11-26T18:28:45.040Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:27.827 19:28:44 keyring_file -- common/autotest_common.sh@978 -- # wait 3278659 00:38:27.827 19:28:44 keyring_file -- keyring/file.sh@21 -- # killprocess 3276760 00:38:27.827 19:28:44 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3276760 ']' 00:38:27.827 19:28:44 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3276760 00:38:27.827 19:28:44 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:27.827 19:28:44 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:27.827 19:28:45 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3276760 00:38:28.088 19:28:45 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:28.088 19:28:45 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:28.088 19:28:45 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3276760' 00:38:28.088 killing process with pid 3276760 00:38:28.088 19:28:45 keyring_file -- common/autotest_common.sh@973 -- # kill 3276760 00:38:28.088 19:28:45 keyring_file -- common/autotest_common.sh@978 -- # wait 3276760 00:38:28.088 00:38:28.088 real 0m12.093s 00:38:28.088 user 0m29.120s 00:38:28.088 sys 0m2.766s 00:38:28.088 19:28:45 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:28.088 19:28:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:28.088 ************************************ 00:38:28.088 END TEST keyring_file 00:38:28.088 ************************************ 00:38:28.088 19:28:45 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:38:28.088 19:28:45 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:28.088 19:28:45 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:28.088 19:28:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:28.089 19:28:45 
-- common/autotest_common.sh@10 -- # set +x 00:38:28.350 ************************************ 00:38:28.350 START TEST keyring_linux 00:38:28.350 ************************************ 00:38:28.350 19:28:45 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:28.350 Joined session keyring: 642788764 00:38:28.350 * Looking for test storage... 00:38:28.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:28.350 19:28:45 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:28.350 19:28:45 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:38:28.350 19:28:45 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:28.350 19:28:45 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:28.350 19:28:45 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:28.350 19:28:45 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:28.350 19:28:45 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:28.350 19:28:45 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:38:28.350 19:28:45 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:38:28.350 19:28:45 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:38:28.350 19:28:45 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:38:28.350 19:28:45 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:38:28.350 19:28:45 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:38:28.350 19:28:45 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:38:28.350 19:28:45 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:28.350 19:28:45 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:38:28.350 19:28:45 keyring_linux -- scripts/common.sh@345 -- # : 1 00:38:28.350 19:28:45 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:28.350 19:28:45 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:28.350 19:28:45 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:38:28.350 19:28:45 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:38:28.350 19:28:45 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:28.350 19:28:45 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:38:28.350 19:28:45 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:38:28.350 19:28:45 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:38:28.350 19:28:45 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:38:28.350 19:28:45 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:28.350 19:28:45 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:38:28.350 19:28:45 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:38:28.350 19:28:45 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:28.350 19:28:45 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:28.350 19:28:45 keyring_linux -- scripts/common.sh@368 -- # return 0 00:38:28.350 19:28:45 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:28.350 19:28:45 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:28.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:28.350 --rc genhtml_branch_coverage=1 00:38:28.350 --rc genhtml_function_coverage=1 00:38:28.350 --rc genhtml_legend=1 00:38:28.350 --rc geninfo_all_blocks=1 00:38:28.350 --rc geninfo_unexecuted_blocks=1 00:38:28.350 00:38:28.350 ' 00:38:28.350 19:28:45 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:28.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:28.350 --rc genhtml_branch_coverage=1 00:38:28.350 --rc genhtml_function_coverage=1 00:38:28.350 --rc genhtml_legend=1 00:38:28.350 --rc geninfo_all_blocks=1 00:38:28.350 --rc geninfo_unexecuted_blocks=1 00:38:28.350 00:38:28.350 ' 00:38:28.350 19:28:45 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:28.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:28.350 --rc genhtml_branch_coverage=1 00:38:28.350 --rc genhtml_function_coverage=1 00:38:28.350 --rc genhtml_legend=1 00:38:28.350 --rc geninfo_all_blocks=1 00:38:28.350 --rc geninfo_unexecuted_blocks=1 00:38:28.350 00:38:28.350 ' 00:38:28.350 19:28:45 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:28.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:28.350 --rc genhtml_branch_coverage=1 00:38:28.350 --rc genhtml_function_coverage=1 00:38:28.350 --rc genhtml_legend=1 00:38:28.350 --rc geninfo_all_blocks=1 00:38:28.350 --rc geninfo_unexecuted_blocks=1 00:38:28.350 00:38:28.350 ' 00:38:28.350 19:28:45 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:28.350 19:28:45 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:28.350 19:28:45 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:38:28.350 19:28:45 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:28.350 19:28:45 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:28.350 19:28:45 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:28.350 19:28:45 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:28.350 19:28:45 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:38:28.350 19:28:45 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:28.350 19:28:45 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:28.350 19:28:45 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:28.350 19:28:45 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:28.350 19:28:45 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:28.350 19:28:45 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:28.350 19:28:45 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:28.350 19:28:45 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:28.612 19:28:45 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:28.612 19:28:45 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:28.612 19:28:45 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:28.612 19:28:45 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:28.612 19:28:45 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:38:28.612 19:28:45 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:28.612 19:28:45 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:28.612 19:28:45 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:28.612 19:28:45 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:28.612 19:28:45 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:28.612 19:28:45 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:28.612 19:28:45 keyring_linux -- paths/export.sh@5 -- # export PATH 00:38:28.612 19:28:45 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
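The NVME_HOSTNQN/NVME_HOSTID pair generated above via nvme gen-hostnqn is the host identity every later connect reuses. A minimal sketch of how that identity is typically consumed; the address, service id, and subsystem NQN are the suite's own defaults from nvmf/common.sh, and the standalone form shown is illustrative rather than the suite's exact code:

  # derive one host identity and reuse it for each connect (mirrors nvmf/common.sh)
  NVME_HOSTNQN=$(nvme gen-hostnqn)             # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}          # bare UUID portion of the NQN
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  # hand the identity to nvme-cli when connecting to the test subsystem
  nvme connect -t tcp -a 127.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn "${NVME_HOST[@]}"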
00:38:28.612 19:28:45 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:38:28.612 19:28:45 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:28.612 19:28:45 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:28.612 19:28:45 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:28.612 19:28:45 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:28.612 19:28:45 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:28.612 19:28:45 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:28.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:28.612 19:28:45 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:28.612 19:28:45 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:28.612 19:28:45 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:28.612 19:28:45 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:28.612 19:28:45 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:28.612 19:28:45 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:28.612 19:28:45 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:38:28.612 19:28:45 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:38:28.612 19:28:45 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:38:28.612 19:28:45 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:38:28.612 19:28:45 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:28.612 19:28:45 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:38:28.612 19:28:45 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:28.612 19:28:45 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:28.612 19:28:45 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:38:28.612 19:28:45 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:28.612 19:28:45 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:28.612 19:28:45 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:28.612 19:28:45 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:28.612 19:28:45 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:28.612 19:28:45 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:28.612 19:28:45 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:28.612 19:28:45 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:38:28.612 19:28:45 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:38:28.612 /tmp/:spdk-test:key0 00:38:28.612 19:28:45 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:38:28.612 19:28:45 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:28.612 19:28:45 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:38:28.612 19:28:45 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:28.612 19:28:45 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:28.612 19:28:45 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:38:28.612 
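The prep_key sequence above (key0 complete, key1 continuing below) writes each PSK in NVMe TLS interchange form before chmod 0600. A sketch of what format_interchange_psk appears to compute, judging from the NVMeTLSkey-1 strings visible in this log; treating the configured key as raw bytes and appending a little-endian CRC32 before base64 is an assumption that fits the 48-character payloads here, not a confirmed reading of nvmf/common.sh:

  # sketch: build "NVMeTLSkey-1:<digest>:<base64(key || crc32)>:" for a configured PSK
  format_interchange_psk() {
      local key=$1 digest=$2
      python3 -c '
import base64, struct, sys, zlib
key = sys.argv[1].encode()                  # digest 0: key material used verbatim
crc = struct.pack("<I", zlib.crc32(key))    # assumption: CRC32, little-endian
print("NVMeTLSkey-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
' "$key" "$digest"
  }
  format_interchange_psk 00112233445566778899aabbccddeeff 0 > /tmp/:spdk-test:key0
  chmod 0600 /tmp/:spdk-test:key0   # 0660 was rejected earlier with "Invalid permissions"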
19:28:45 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:28.612 19:28:45 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:28.612 19:28:45 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:28.612 19:28:45 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:28.612 19:28:45 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:28.612 19:28:45 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:28.612 19:28:45 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:28.612 19:28:45 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:38:28.612 19:28:45 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:38:28.612 /tmp/:spdk-test:key1 00:38:28.612 19:28:45 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3279102 00:38:28.612 19:28:45 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3279102 00:38:28.612 19:28:45 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:28.612 19:28:45 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3279102 ']' 00:38:28.612 19:28:45 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:28.612 19:28:45 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:28.612 19:28:45 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:28.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:28.612 19:28:45 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:28.612 19:28:45 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:28.612 [2024-11-26 19:28:45.735917] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
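spdk_tgt is launched above and the suite then blocks in waitforlisten until the RPC socket answers. A condensed sketch of that pattern, assuming the stock rpc.py rpc_get_methods probe; the retry count and sleep interval are illustrative, not the suite's exact values:

  # sketch: wait until a freshly started SPDK app answers on its RPC socket
  rpc_addr=/var/tmp/spdk.sock
  build/bin/spdk_tgt & tgt_pid=$!
  for _ in $(seq 1 100); do
      kill -0 "$tgt_pid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
      scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done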
00:38:28.612 [2024-11-26 19:28:45.735998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3279102 ] 00:38:28.873 [2024-11-26 19:28:45.824113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:28.873 [2024-11-26 19:28:45.859635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:29.443 19:28:46 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:29.443 19:28:46 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:38:29.443 19:28:46 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:38:29.443 19:28:46 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.443 19:28:46 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:29.443 [2024-11-26 19:28:46.522789] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:29.443 null0 00:38:29.443 [2024-11-26 19:28:46.554842] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:29.443 [2024-11-26 19:28:46.555205] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:29.443 19:28:46 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.443 19:28:46 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:38:29.443 273936743 00:38:29.443 19:28:46 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:38:29.443 330901094 00:38:29.443 19:28:46 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3279428 00:38:29.443 19:28:46 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3279428 /var/tmp/bperf.sock 00:38:29.443 19:28:46 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:38:29.443 19:28:46 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3279428 ']' 00:38:29.443 19:28:46 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:29.444 19:28:46 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:29.444 19:28:46 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:29.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:29.444 19:28:46 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:29.444 19:28:46 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:29.444 [2024-11-26 19:28:46.631974] Starting SPDK v25.01-pre git sha1 afdec00e1 / DPDK 24.03.0 initialization... 
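The two keyctl add calls above stage the interchange PSKs under the kernel session keyring (@s) and print the serials (273936743 and 330901094) that the test later resolves with keyctl search. A sketch of the same round trip; the $(cat ...) composition is illustrative:

  # sketch: stage a PSK under the session keyring, then resolve and inspect it
  sn=$(keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s)
  keyctl search @s user :spdk-test:key0    # prints the same serial, e.g. 273936743
  keyctl print "$sn"                       # dumps the NVMeTLSkey-1:... payload
  keyctl unlink "$sn"                      # what cleanup does once the test is over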
00:38:29.444 [2024-11-26 19:28:46.632024] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3279428 ] 00:38:29.704 [2024-11-26 19:28:46.713322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:29.704 [2024-11-26 19:28:46.742931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:30.275 19:28:47 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:30.275 19:28:47 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:38:30.275 19:28:47 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:38:30.275 19:28:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:38:30.535 19:28:47 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:38:30.535 19:28:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:30.797 19:28:47 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:30.797 19:28:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:30.797 [2024-11-26 19:28:47.964236] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:31.059 nvme0n1 00:38:31.059 19:28:48 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:38:31.059 19:28:48 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:38:31.059 19:28:48 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:31.059 19:28:48 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:31.059 19:28:48 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:31.059 19:28:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:31.059 19:28:48 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:38:31.059 19:28:48 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:31.059 19:28:48 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:38:31.059 19:28:48 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:38:31.059 19:28:48 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:31.059 19:28:48 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:38:31.059 19:28:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:31.320 19:28:48 keyring_linux -- keyring/linux.sh@25 -- # sn=273936743 00:38:31.320 19:28:48 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:38:31.320 19:28:48 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:31.320 19:28:48 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 273936743 == \2\7\3\9\3\6\7\4\3 ]] 00:38:31.320 19:28:48 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 273936743 00:38:31.320 19:28:48 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:38:31.320 19:28:48 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:31.320 Running I/O for 1 seconds... 00:38:32.703 24516.00 IOPS, 95.77 MiB/s 00:38:32.703 Latency(us) 00:38:32.703 [2024-11-26T18:28:49.916Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:32.703 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:32.703 nvme0n1 : 1.01 24516.75 95.77 0.00 0.00 5205.34 2280.11 6717.44 00:38:32.703 [2024-11-26T18:28:49.916Z] =================================================================================================================== 00:38:32.703 [2024-11-26T18:28:49.916Z] Total : 24516.75 95.77 0.00 0.00 5205.34 2280.11 6717.44 00:38:32.703 { 00:38:32.703 "results": [ 00:38:32.703 { 00:38:32.703 "job": "nvme0n1", 00:38:32.703 "core_mask": "0x2", 00:38:32.703 "workload": "randread", 00:38:32.703 "status": "finished", 00:38:32.703 "queue_depth": 128, 00:38:32.703 "io_size": 4096, 00:38:32.703 "runtime": 1.005231, 00:38:32.703 "iops": 24516.752865759212, 00:38:32.703 "mibps": 95.76856588187192, 00:38:32.703 "io_failed": 0, 00:38:32.703 "io_timeout": 0, 00:38:32.703 "avg_latency_us": 5205.344828836141, 00:38:32.703 "min_latency_us": 2280.1066666666666, 00:38:32.703 "max_latency_us": 6717.44 00:38:32.703 } 00:38:32.703 ], 00:38:32.703 "core_count": 1 00:38:32.703 } 00:38:32.703 19:28:49 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:32.703 19:28:49 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:32.703 19:28:49 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:38:32.703 19:28:49 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:38:32.703 19:28:49 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:32.703 19:28:49 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:32.703 19:28:49 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:32.703 19:28:49 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:32.965 19:28:49 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:38:32.965 19:28:49 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:32.965 19:28:49 keyring_linux -- keyring/linux.sh@23 -- # return 00:38:32.965 19:28:49 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:32.965 19:28:49 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:38:32.965 19:28:49 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
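The perform_tests call above drives the queued bdevperf instance over /var/tmp/bperf.sock, and the per-job numbers land in the results JSON captured in the log. A sketch of pulling the headline figures back out; it assumes bdevperf.py prints that results object to stdout on completion, as the capture suggests, and the jq filter names only fields visible above:

  # sketch: kick off the queued I/O run and summarize the reported results
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests |
      jq -r '.results[] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us"'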
00:38:32.965 19:28:49 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:32.965 19:28:49 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:32.965 19:28:49 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:32.965 19:28:49 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:32.965 19:28:49 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:32.965 19:28:49 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:32.965 [2024-11-26 19:28:50.084108] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:32.965 [2024-11-26 19:28:50.085002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c59e0 (107): Transport endpoint is not connected 00:38:32.965 [2024-11-26 19:28:50.085999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c59e0 (9): Bad file descriptor 00:38:32.965 [2024-11-26 19:28:50.087001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:32.965 [2024-11-26 19:28:50.087009] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:32.965 [2024-11-26 19:28:50.087014] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:32.965 [2024-11-26 19:28:50.087020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:38:32.965 request: 00:38:32.965 { 00:38:32.965 "name": "nvme0", 00:38:32.965 "trtype": "tcp", 00:38:32.965 "traddr": "127.0.0.1", 00:38:32.965 "adrfam": "ipv4", 00:38:32.965 "trsvcid": "4420", 00:38:32.965 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:32.965 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:32.966 "prchk_reftag": false, 00:38:32.966 "prchk_guard": false, 00:38:32.966 "hdgst": false, 00:38:32.966 "ddgst": false, 00:38:32.966 "psk": ":spdk-test:key1", 00:38:32.966 "allow_unrecognized_csi": false, 00:38:32.966 "method": "bdev_nvme_attach_controller", 00:38:32.966 "req_id": 1 00:38:32.966 } 00:38:32.966 Got JSON-RPC error response 00:38:32.966 response: 00:38:32.966 { 00:38:32.966 "code": -5, 00:38:32.966 "message": "Input/output error" 00:38:32.966 } 00:38:32.966 19:28:50 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:38:32.966 19:28:50 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:32.966 19:28:50 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:32.966 19:28:50 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:32.966 19:28:50 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:38:32.966 19:28:50 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:32.966 19:28:50 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:38:32.966 19:28:50 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:38:32.966 19:28:50 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:38:32.966 19:28:50 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:32.966 19:28:50 keyring_linux -- keyring/linux.sh@33 -- # sn=273936743 00:38:32.966 19:28:50 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 273936743 00:38:32.966 1 links removed 00:38:32.966 19:28:50 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:32.966 19:28:50 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:38:32.966 19:28:50 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:38:32.966 19:28:50 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:38:32.966 19:28:50 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:38:32.966 19:28:50 keyring_linux -- keyring/linux.sh@33 -- # sn=330901094 00:38:32.966 19:28:50 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 330901094 00:38:32.966 1 links removed 00:38:32.966 19:28:50 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3279428 00:38:32.966 19:28:50 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3279428 ']' 00:38:32.966 19:28:50 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3279428 00:38:32.966 19:28:50 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:38:32.966 19:28:50 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:32.966 19:28:50 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3279428 00:38:33.226 19:28:50 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:33.226 19:28:50 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:33.226 19:28:50 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3279428' 00:38:33.226 killing process with pid 3279428 00:38:33.227 19:28:50 keyring_linux -- common/autotest_common.sh@973 -- # kill 3279428 00:38:33.227 Received shutdown signal, test time was about 1.000000 seconds 00:38:33.227 00:38:33.227 
Latency(us) 00:38:33.227 [2024-11-26T18:28:50.440Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:33.227 [2024-11-26T18:28:50.440Z] =================================================================================================================== 00:38:33.227 [2024-11-26T18:28:50.440Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:33.227 19:28:50 keyring_linux -- common/autotest_common.sh@978 -- # wait 3279428 00:38:33.227 19:28:50 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3279102 00:38:33.227 19:28:50 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3279102 ']' 00:38:33.227 19:28:50 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3279102 00:38:33.227 19:28:50 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:38:33.227 19:28:50 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:33.227 19:28:50 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3279102 00:38:33.227 19:28:50 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:33.227 19:28:50 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:33.227 19:28:50 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3279102' 00:38:33.227 killing process with pid 3279102 00:38:33.227 19:28:50 keyring_linux -- common/autotest_common.sh@973 -- # kill 3279102 00:38:33.227 19:28:50 keyring_linux -- common/autotest_common.sh@978 -- # wait 3279102 00:38:33.489 00:38:33.489 real 0m5.211s 00:38:33.489 user 0m9.699s 00:38:33.489 sys 0m1.440s 00:38:33.489 19:28:50 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:33.489 19:28:50 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:33.489 ************************************ 00:38:33.489 END TEST keyring_linux 00:38:33.489 ************************************ 00:38:33.489 19:28:50 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:38:33.489 19:28:50 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:38:33.489 19:28:50 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:38:33.489 19:28:50 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:38:33.489 19:28:50 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:38:33.489 19:28:50 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:38:33.489 19:28:50 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:38:33.489 19:28:50 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:38:33.489 19:28:50 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:38:33.489 19:28:50 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:38:33.489 19:28:50 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:38:33.489 19:28:50 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:38:33.489 19:28:50 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:38:33.489 19:28:50 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:38:33.489 19:28:50 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:38:33.489 19:28:50 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:38:33.489 19:28:50 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:38:33.489 19:28:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:33.489 19:28:50 -- common/autotest_common.sh@10 -- # set +x 00:38:33.489 19:28:50 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:38:33.489 19:28:50 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:38:33.489 19:28:50 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:38:33.489 19:28:50 -- common/autotest_common.sh@10 -- # set +x 00:38:41.630 INFO: APP EXITING 
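With the negative case confirmed, cleanup removes both PSKs from the kernel session keyring by serial number and then kills the bperf (reactor_1) and target (reactor_0) processes, as traced above. The unlink step condenses to two keyctl calls per key (commands and the serial number below are taken verbatim from this run):

    # Resolve the key's serial number in the session keyring, then drop it.
    sn=$(keyctl search @s user :spdk-test:key0)   # -> 273936743
    keyctl unlink "$sn"                           # "1 links removed"

The INFO/WARN lines that follow come from the generic post-test VM and vhost cleanup; no vhost instance was started by this job, hence the missing pid file warning.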
00:38:41.630 INFO: killing all VMs 00:38:41.630 INFO: killing vhost app 00:38:41.630 WARN: no vhost pid file found 00:38:41.630 INFO: EXIT DONE 00:38:44.931 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:38:44.931 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:38:44.931 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:38:44.931 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:38:44.931 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:38:44.931 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:38:44.931 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:38:44.931 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:38:44.931 0000:65:00.0 (144d a80a): Already using the nvme driver 00:38:44.931 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:38:44.931 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:38:44.931 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:38:44.931 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:38:44.931 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:38:44.931 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:38:44.931 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:38:44.931 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:38:49.139 Cleaning 00:38:49.139 Removing: /var/run/dpdk/spdk0/config 00:38:49.139 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:38:49.139 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:38:49.139 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:38:49.139 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:38:49.139 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:38:49.139 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:38:49.139 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:38:49.139 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:38:49.139 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:38:49.139 Removing: /var/run/dpdk/spdk0/hugepage_info 00:38:49.139 Removing: /var/run/dpdk/spdk1/config 00:38:49.139 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:38:49.139 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:38:49.139 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:38:49.139 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:38:49.139 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:38:49.139 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:38:49.139 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:38:49.139 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:38:49.139 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:38:49.139 Removing: /var/run/dpdk/spdk1/hugepage_info 00:38:49.139 Removing: /var/run/dpdk/spdk2/config 00:38:49.139 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:38:49.139 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:38:49.139 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:38:49.139 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:38:49.139 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:38:49.139 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:38:49.139 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:38:49.139 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:38:49.139 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:38:49.139 Removing: /var/run/dpdk/spdk2/hugepage_info 00:38:49.139 Removing: 
/var/run/dpdk/spdk3/config 00:38:49.139 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:38:49.139 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:38:49.139 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:38:49.139 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:38:49.139 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:38:49.139 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:38:49.139 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:38:49.139 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:38:49.139 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:38:49.139 Removing: /var/run/dpdk/spdk3/hugepage_info 00:38:49.139 Removing: /var/run/dpdk/spdk4/config 00:38:49.139 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:38:49.139 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:38:49.139 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:38:49.139 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:38:49.139 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:38:49.139 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:38:49.139 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:38:49.139 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:38:49.139 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:38:49.139 Removing: /var/run/dpdk/spdk4/hugepage_info 00:38:49.139 Removing: /dev/shm/bdev_svc_trace.1 00:38:49.139 Removing: /dev/shm/nvmf_trace.0 00:38:49.139 Removing: /dev/shm/spdk_tgt_trace.pid2699375 00:38:49.139 Removing: /var/run/dpdk/spdk0 00:38:49.139 Removing: /var/run/dpdk/spdk1 00:38:49.139 Removing: /var/run/dpdk/spdk2 00:38:49.139 Removing: /var/run/dpdk/spdk3 00:38:49.139 Removing: /var/run/dpdk/spdk4 00:38:49.139 Removing: /var/run/dpdk/spdk_pid2697884 00:38:49.139 Removing: /var/run/dpdk/spdk_pid2699375 00:38:49.139 Removing: /var/run/dpdk/spdk_pid2700226 00:38:49.139 Removing: /var/run/dpdk/spdk_pid2701276 00:38:49.139 Removing: /var/run/dpdk/spdk_pid2701616 00:38:49.139 Removing: /var/run/dpdk/spdk_pid2702688 00:38:49.139 Removing: /var/run/dpdk/spdk_pid2702892 00:38:49.139 Removing: /var/run/dpdk/spdk_pid2703159 00:38:49.139 Removing: /var/run/dpdk/spdk_pid2704301 00:38:49.139 Removing: /var/run/dpdk/spdk_pid2705059 00:38:49.139 Removing: /var/run/dpdk/spdk_pid2705419 00:38:49.139 Removing: /var/run/dpdk/spdk_pid2705733 00:38:49.139 Removing: /var/run/dpdk/spdk_pid2706103 00:38:49.139 Removing: /var/run/dpdk/spdk_pid2706403 00:38:49.139 Removing: /var/run/dpdk/spdk_pid2706733 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2707084 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2707471 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2708542 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2712084 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2712347 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2712686 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2712872 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2713251 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2713582 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2713960 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2713979 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2714339 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2714655 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2714713 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2715046 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2715497 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2715846 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2716222 00:38:49.140 Removing: 
/var/run/dpdk/spdk_pid2720768 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2726151 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2738249 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2738930 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2744333 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2745061 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2750317 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2757405 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2760747 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2773359 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2784246 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2786358 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2787460 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2808725 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2813563 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2869645 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2876043 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2883236 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2891133 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2891135 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2892141 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2893148 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2894153 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2894828 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2894833 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2895159 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2895180 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2895209 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2896284 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2897285 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2898367 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2898976 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2899092 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2899325 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2900789 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2902357 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2912437 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2947065 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2952495 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2954493 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2956765 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2956934 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2957209 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2957496 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2958258 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2960282 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2961362 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2962070 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2964773 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2965488 00:38:49.140 Removing: /var/run/dpdk/spdk_pid2966199 00:38:49.401 Removing: /var/run/dpdk/spdk_pid2971266 00:38:49.401 Removing: /var/run/dpdk/spdk_pid2977968 00:38:49.401 Removing: /var/run/dpdk/spdk_pid2977969 00:38:49.401 Removing: /var/run/dpdk/spdk_pid2977970 00:38:49.401 Removing: /var/run/dpdk/spdk_pid2982662 00:38:49.401 Removing: /var/run/dpdk/spdk_pid2993553 00:38:49.401 Removing: /var/run/dpdk/spdk_pid2998399 00:38:49.401 Removing: /var/run/dpdk/spdk_pid3005619 00:38:49.401 Removing: /var/run/dpdk/spdk_pid3007118 00:38:49.401 Removing: /var/run/dpdk/spdk_pid3008970 00:38:49.401 Removing: /var/run/dpdk/spdk_pid3010659 00:38:49.401 Removing: /var/run/dpdk/spdk_pid3016200 00:38:49.401 Removing: /var/run/dpdk/spdk_pid3021656 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3026539 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3035845 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3035860 00:38:49.402 Removing: 
/var/run/dpdk/spdk_pid3040910 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3041240 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3041576 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3041922 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3042054 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3048194 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3048870 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3054615 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3058168 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3065202 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3072042 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3082464 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3090856 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3090863 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3114527 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3115274 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3115952 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3116643 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3117700 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3118387 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3119173 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3120038 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3125143 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3125475 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3132626 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3132901 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3139360 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3144463 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3156653 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3157325 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3162479 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3162933 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3168214 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3175181 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3178396 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3190824 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3201597 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3204183 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3205256 00:38:49.402 Removing: /var/run/dpdk/spdk_pid3224963 00:38:49.664 Removing: /var/run/dpdk/spdk_pid3229686 00:38:49.664 Removing: /var/run/dpdk/spdk_pid3232868 00:38:49.664 Removing: /var/run/dpdk/spdk_pid3240315 00:38:49.664 Removing: /var/run/dpdk/spdk_pid3240395 00:38:49.664 Removing: /var/run/dpdk/spdk_pid3246509 00:38:49.664 Removing: /var/run/dpdk/spdk_pid3248710 00:38:49.664 Removing: /var/run/dpdk/spdk_pid3251003 00:38:49.664 Removing: /var/run/dpdk/spdk_pid3252519 00:38:49.664 Removing: /var/run/dpdk/spdk_pid3255366 00:38:49.664 Removing: /var/run/dpdk/spdk_pid3256705 00:38:49.664 Removing: /var/run/dpdk/spdk_pid3266760 00:38:49.664 Removing: /var/run/dpdk/spdk_pid3267325 00:38:49.664 Removing: /var/run/dpdk/spdk_pid3267994 00:38:49.664 Removing: /var/run/dpdk/spdk_pid3270940 00:38:49.664 Removing: /var/run/dpdk/spdk_pid3271488 00:38:49.664 Removing: /var/run/dpdk/spdk_pid3271965 00:38:49.664 Removing: /var/run/dpdk/spdk_pid3276760 00:38:49.664 Removing: /var/run/dpdk/spdk_pid3276847 00:38:49.664 Removing: /var/run/dpdk/spdk_pid3278659 00:38:49.664 Removing: /var/run/dpdk/spdk_pid3279102 00:38:49.664 Removing: /var/run/dpdk/spdk_pid3279428 00:38:49.664 Clean 00:38:49.664 19:29:06 -- common/autotest_common.sh@1453 -- # return 0 00:38:49.664 19:29:06 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:38:49.664 19:29:06 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:49.664 19:29:06 -- common/autotest_common.sh@10 -- # set +x 00:38:49.664 19:29:06 -- 
spdk/autotest.sh@391 -- # timing_exit autotest 00:38:49.664 19:29:06 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:49.664 19:29:06 -- common/autotest_common.sh@10 -- # set +x 00:38:49.926 19:29:06 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:49.926 19:29:06 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:38:49.926 19:29:06 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:38:49.926 19:29:06 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:38:49.926 19:29:06 -- spdk/autotest.sh@398 -- # hostname 00:38:49.926 19:29:06 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:38:49.926 geninfo: WARNING: invalid characters removed from testname! 00:39:16.501 19:29:32 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:18.412 19:29:35 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:20.324 19:29:37 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:21.708 19:29:38 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:23.620 19:29:40 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:25.002 19:29:42 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:26.913 19:29:43 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:39:26.913 19:29:43 -- spdk/autorun.sh@1 -- $ timing_finish 00:39:26.913 19:29:43 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:39:26.913 19:29:43 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:39:26.913 19:29:43 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:39:26.913 19:29:43 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:39:26.913 + [[ -n 2612502 ]] 00:39:26.913 + sudo kill 2612502 00:39:26.923 [Pipeline] } 00:39:26.939 [Pipeline] // stage 00:39:26.944 [Pipeline] } 00:39:26.959 [Pipeline] // timeout 00:39:26.965 [Pipeline] } 00:39:26.979 [Pipeline] // catchError 00:39:26.984 [Pipeline] } 00:39:27.000 [Pipeline] // wrap 00:39:27.006 [Pipeline] } 00:39:27.019 [Pipeline] // catchError 00:39:27.029 [Pipeline] stage 00:39:27.032 [Pipeline] { (Epilogue) 00:39:27.045 [Pipeline] catchError 00:39:27.047 [Pipeline] { 00:39:27.060 [Pipeline] echo 00:39:27.062 Cleanup processes 00:39:27.068 [Pipeline] sh 00:39:27.358 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:27.358 3292451 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:27.373 [Pipeline] sh 00:39:27.662 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:27.662 ++ grep -v 'sudo pgrep' 00:39:27.662 ++ awk '{print $1}' 00:39:27.662 + sudo kill -9 00:39:27.662 + true 00:39:27.675 [Pipeline] sh 00:39:27.961 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:39:40.373 [Pipeline] sh 00:39:40.661 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:39:40.661 Artifacts sizes are good 00:39:40.677 [Pipeline] archiveArtifacts 00:39:40.686 Archiving artifacts 00:39:40.875 [Pipeline] sh 00:39:41.169 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:39:41.187 [Pipeline] cleanWs 00:39:41.199 [WS-CLEANUP] Deleting project workspace... 00:39:41.199 [WS-CLEANUP] Deferred wipeout is used... 00:39:41.207 [WS-CLEANUP] done 00:39:41.209 [Pipeline] } 00:39:41.227 [Pipeline] // catchError 00:39:41.239 [Pipeline] sh 00:39:41.529 + logger -p user.info -t JENKINS-CI 00:39:41.540 [Pipeline] } 00:39:41.554 [Pipeline] // stage 00:39:41.560 [Pipeline] } 00:39:41.575 [Pipeline] // node 00:39:41.581 [Pipeline] End of Pipeline 00:39:41.619 Finished: SUCCESS